{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T16:43:42Z","timestamp":1772642622396,"version":"3.50.1"},"reference-count":54,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,4,16]],"date-time":"2024-04-16T00:00:00Z","timestamp":1713225600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,4,16]],"date-time":"2024-04-16T00:00:00Z","timestamp":1713225600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Intell Robot Syst"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>To aid humans in everyday tasks, robots need to know which objects exist in the scene, where they are, and how to grasp and manipulate them in different situations. Therefore, object recognition and grasping are two key functionalities for autonomous robots. Most state-of-the-art approaches treat object recognition and grasping as two separate problems, even though both use visual input. Furthermore, the knowledge of the robot is fixed after the training phase. In such cases, if the robot encounters new object categories, it must be retrained to incorporate new information without catastrophic forgetting. To resolve this problem, we propose a deep learning architecture with an augmented memory capacity to handle open-ended object recognition and grasping simultaneously. In particular, our approach takes multi-views of an object as input and jointly estimates pixel-wise grasp configuration as well as a deep scale- and rotation-invariant representation as output. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. 
We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site in both simulation and real-world settings. Our approach empowers a robot to acquire knowledge about new object categories using, on average, less than five instances per category and achieve <jats:inline-formula><jats:alternatives><jats:tex-math>$$95\\%$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\"><mml:mrow><mml:mn>95<\/mml:mn><mml:mo>%<\/mml:mo><\/mml:mrow><\/mml:math><\/jats:alternatives><\/jats:inline-formula> object recognition accuracy and above <jats:inline-formula><jats:alternatives><jats:tex-math>$$91\\%$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\"><mml:mrow><mml:mn>91<\/mml:mn><mml:mo>%<\/mml:mo><\/mml:mrow><\/mml:math><\/jats:alternatives><\/jats:inline-formula> grasp success rate on (highly) cluttered scenarios in both simulation and real-robot experiments. 
A video of these experiments is available online at:<jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/youtu.be\/n9SMpuEkOgk\">https:\/\/youtu.be\/n9SMpuEkOgk<\/jats:ext-link><\/jats:p>","DOI":"10.1007\/s10846-024-02092-5","type":"journal-article","created":{"date-parts":[[2024,4,16]],"date-time":"2024-04-16T08:02:01Z","timestamp":1713254521000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Simultaneous Multi-View Object Recognition and Grasping in Open-Ended Domains"],"prefix":"10.1007","volume":"110","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9408-7730","authenticated-orcid":false,"given":"Hamidreza","family":"Kasaei","sequence":"first","affiliation":[]},{"given":"Mohammadreza","family":"Kasaei","sequence":"additional","affiliation":[]},{"given":"Georgios","family":"Tziafas","sequence":"additional","affiliation":[]},{"given":"Sha","family":"Luo","sequence":"additional","affiliation":[]},{"given":"Remo","family":"Sasso","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,4,16]]},"reference":[{"key":"2092_CR1","doi-asserted-by":"crossref","unstructured":"Wang, J., Chakraborty, R., Stella, X.Y.: Spatial transformer for 3d point clouds. IEEE Trans. Pattern Anal. Mach. Intell. (2021)","DOI":"10.1109\/TPAMI.2021.3070341"},{"key":"2092_CR2","doi-asserted-by":"crossref","unstructured":"Yu, C., Wang, J., Gao, C., Yu, G., Shen, C., Sang, N.: Context prior for scene segmentation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR), (2020)","DOI":"10.1109\/CVPR42600.2020.01243"},{"key":"2092_CR3","doi-asserted-by":"crossref","unstructured":"Fang, H.-S., Wang, C., Gou, M., Lu, C.: Graspnet-1billion: a large-scale benchmark for general object grasping. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 
11\u00a0444\u201311\u00a0453 (2020)","DOI":"10.1109\/CVPR42600.2020.01146"},{"key":"2092_CR4","doi-asserted-by":"crossref","unstructured":"Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114(13), 3521\u20133526\u00a0(2017)","DOI":"10.1073\/pnas.1611835114"},{"issue":"2","key":"2092_CR5","doi-asserted-by":"publisher","first-page":"289","DOI":"10.1109\/TRO.2013.2289018","volume":"30","author":"J Bohg","year":"2013","unstructured":"Bohg, J., Morales, A., Asfour, T., Kragic, D.: Data-driven grasp synthesis\u2013a survey. IEEE Trans. Rob. 30(2), 289\u2013309 (2013)","journal-title":"IEEE Trans. Rob."},{"issue":"4\u20135","key":"2092_CR6","doi-asserted-by":"publisher","first-page":"705","DOI":"10.1177\/0278364914549607","volume":"34","author":"I Lenz","year":"2015","unstructured":"Lenz, I., Lee, H., Saxena, A.: Deep learning for detecting robotic grasps. The International Journal of Robotics Research 34(4\u20135), 705\u2013724 (2015)","journal-title":"The International Journal of Robotics Research"},{"key":"2092_CR7","doi-asserted-by":"crossref","unstructured":"Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Ojea, J.A., Goldberg, K.: Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics (2017). arXiv preprint arXiv:1703.09312","DOI":"10.15607\/RSS.2017.XIII.058"},{"key":"2092_CR8","doi-asserted-by":"crossref","unstructured":"Morrison, D., Corke, P., Leitner, J.: Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. In: Proceedings of robotics: science and systems (RSS), (2018)","DOI":"10.15607\/RSS.2018.XIV.021"},{"key":"2092_CR9","doi-asserted-by":"crossref","unstructured":"Klokov, R., Lempitsky, V.: Escape from cells: Deep kd-networks for the recognition of 3D point cloud models. 
In: Proceedings of the IEEE international conference on computer vision, pp. 863\u2013872 (2017)","DOI":"10.1109\/ICCV.2017.99"},{"key":"2092_CR10","doi-asserted-by":"crossref","unstructured":"Kanezaki, A., Matsushita, Y., Nishida, Y.: RotationNet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5010\u20135019 (2018)","DOI":"10.1109\/CVPR.2018.00526"},{"key":"2092_CR11","doi-asserted-by":"crossref","unstructured":"Kumra, S., Joshi, S., Sahin, F.: Antipodal robotic grasping using generative residual convolutional neural network. In: IEEE\/RSJ International conference on intelligent robots and systems (IROS) 2020, 9626\u20139633 (2020)","DOI":"10.1109\/IROS45743.2020.9340777"},{"key":"2092_CR12","unstructured":"Breyer, M., Chung, J.J., Ott, L., Roland, S., Juan, N.: Volumetric grasping network: Real-time 6 dof grasp detection in clutter. In: Conference on robot learning, (2020)"},{"key":"2092_CR13","doi-asserted-by":"crossref","unstructured":"Mousavian, A., Eppner, C., Fox, D.: 6-dof graspnet: Variational grasp generation for object manipulation. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp. 2901\u20132910 (2019)","DOI":"10.1109\/ICCV.2019.00299"},{"key":"2092_CR14","doi-asserted-by":"crossref","unstructured":"Newbury, R., Gu, M., Chumbley, L., Mousavian, A., Eppner, C., Leitner, J., Bohg, J., Morales, A., Asfour, T., Kragic D et\u00a0al.: Deep learning approaches to grasp synthesis: A review. IEEE Trans. Robot. (2023)","DOI":"10.1109\/TRO.2023.3280597"},{"key":"2092_CR15","unstructured":"Bochkovskiy, A., Wang, C.-Y., Liao, H.-Y.M.: Yolov4: Optimal speed and accuracy of object detection (2020). arXiv preprint arXiv:2004.10934"},{"key":"2092_CR16","doi-asserted-by":"crossref","unstructured":"Bendale, A., Boult, T.E.: Towards open set deep networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1563\u20131572 (2016)","DOI":"10.1109\/CVPR.2016.173"},{"key":"2092_CR17","doi-asserted-by":"crossref","unstructured":"Subramanya, A., Pillai, V., Pirsiavash, H.: Fooling network interpretation in image classification. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp. 2020\u20132029 (2019)","DOI":"10.1109\/ICCV.2019.00211"},{"key":"2092_CR18","doi-asserted-by":"crossref","unstructured":"Da, Q., Yu, Y., Zhou, Z.-H., Learning with augmented class by exploiting unlabeled data. In: Proceedings of the AAAI conference on artificial intelligence, 28(1), 2014","DOI":"10.1609\/aaai.v28i1.8997"},{"issue":"11","key":"2092_CR19","doi-asserted-by":"publisher","first-page":"2317","DOI":"10.1109\/TPAMI.2014.2321392","volume":"36","author":"WJ Scheirer","year":"2014","unstructured":"Scheirer, W.J., Jain, L.P., Boult, T.E.: Probability models for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2317\u20132324 (2014)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"2092_CR20","unstructured":"Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., Xiao, J.: 3D shapenets: A deep representation for volumetric shapes. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912\u20131920 (2015)"},{"key":"2092_CR21","doi-asserted-by":"crossref","unstructured":"Maturana, D., Scherer, S.: VoxNet: A 3D convolutional neural network for real-time object recognition. In: 2015 IEEE\/RSJ International conference on intelligent robots and systems (IROS). IEEE, pp. 922\u2013928 (2015)","DOI":"10.1109\/IROS.2015.7353481"},{"key":"2092_CR22","doi-asserted-by":"crossref","unstructured":"Qi, C.R., Su, H., Nie\u00dfner, M., Dai, A., Yan, M., Guibas, L.J.: Volumetric and multi-view CNNs for object classification on 3D data. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
5648\u20135656 (2016)","DOI":"10.1109\/CVPR.2016.609"},{"issue":"12","key":"2092_CR23","doi-asserted-by":"publisher","first-page":"2339","DOI":"10.1109\/LSP.2015.2480802","volume":"22","author":"B Shi","year":"2015","unstructured":"Shi, B., Bai, S., Zhou, Z., Bai, X.: Deeppano: Deep panoramic representation for 3-d shape recognition. IEEE Signal Process. Lett. 22(12), 2339\u20132343 (2015)","journal-title":"IEEE Signal Process. Lett."},{"key":"2092_CR24","doi-asserted-by":"crossref","unstructured":"Su, H., Maji, S., Kalogerakis, E., Learned-Miller, E.: Multi-view convolutional neural networks for 3D shape recognition. In: Proceedings of the IEEE international conference on computer vision, pp. 945\u2013953 (2015)","DOI":"10.1109\/ICCV.2015.114"},{"key":"2092_CR25","doi-asserted-by":"crossref","unstructured":"Parisotto, T., Mukherjee, S., Kasaei, H.: More: simultaneous multi-view 3d object recognition and pose estimation. Int. Serv. Robot. pp. 1\u201312 (2023)","DOI":"10.1007\/s11370-023-00468-4"},{"key":"2092_CR26","doi-asserted-by":"crossref","unstructured":"Xiong, K.H., Songsong.: Enhancing fine-grained 3d object recognition using hybrid multi-modal vision transformer-cnn models. In: 2023 IEEE\/RSJ International conference on intelligent robots and systems (IROS). IEEE, (2023)","DOI":"10.1109\/IROS55552.2023.10342235"},{"key":"2092_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10846-021-01458-3","volume":"103","author":"SH Kasaei","year":"2021","unstructured":"Kasaei, S.H., Melsen, J., van Beers, F., Steenkist, C., Voncina, K.: The state of lifelong learning in service robots: Current bottlenecks in object perception and manipulation. Journal of Intelligent & Robotic Systems 103, 1\u201331 (2021)","journal-title":"Journal of Intelligent & Robotic Systems"},{"key":"2092_CR28","unstructured":"Sener, O., Savarese, S.: Active learning for convolutional neural networks: A core-set approach (2017). 
arXiv preprint arXiv:1708.00489"},{"key":"2092_CR29","doi-asserted-by":"crossref","unstructured":"Aggarwal, U., Popescu, A., Hudelot, C.: Active learning for imbalanced datasets. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision (WACV), (2020)","DOI":"10.1109\/WACV45572.2020.9093475"},{"key":"2092_CR30","doi-asserted-by":"crossref","unstructured":"Siddiqui, Y., Valentin, J., Niessner, M.: Viewal: Active learning with viewpoint entropy for semantic segmentation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR), (2020)","DOI":"10.1109\/CVPR42600.2020.00945"},{"key":"2092_CR31","unstructured":"Gal, Y., Islam, R., Ghahramani, Z.: Deep bayesian active learning with image data. In: International conference on machine learning. PMLR, pp. 1183\u20131192 (2017)"},{"key":"2092_CR32","doi-asserted-by":"crossref","unstructured":"Kasaei, S.H.O.: OrthographicNet: A deep transfer learning approach for 3D object recognition in open-ended domains. IEEE\/ASME Trans. Mechatronics, pp 1\u20131 (2020)","DOI":"10.1109\/TMECH.2020.3048433"},{"key":"2092_CR33","unstructured":"Kasaei, S.H., Tom\u00e9, A.M., Lopes, L.S.: Hierarchical object representation for open-ended object category learning and recognition. In: Advances in neural information processing systems, pp. 1948\u20131956 (2016)"},{"key":"2092_CR34","doi-asserted-by":"crossref","unstructured":"Kasaei, X.S., Hamidreza.: Lifelong ensemble learning based on multiple representations for few-shot object recognition. Robot. Auton. Syst. (2023)","DOI":"10.1016\/j.robot.2023.104615"},{"issue":"9","key":"2092_CR35","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3472291","volume":"54","author":"P Ren","year":"2021","unstructured":"Ren, P., Xiao, Y., Chang, X., Huang, P.-Y., Li, Z., Gupta, B.B., Chen, X., Wang, X.: A survey of deep active learning. 
ACM computing surveys (CSUR) 54(9), 1\u201340 (2021)","journal-title":"ACM computing surveys (CSUR)"},{"key":"2092_CR36","doi-asserted-by":"crossref","unstructured":"Liu, S., Li, T., Chen, W., Li, H.: Soft rasterizer: A differentiable renderer for image-based 3D reasoning. In: Proceedings of the IEEE international conference on computer vision, pp. 7708\u20137717 (2019)","DOI":"10.1109\/ICCV.2019.00780"},{"issue":"3","key":"2092_CR37","doi-asserted-by":"publisher","first-page":"52","DOI":"10.1145\/504729.504754","volume":"45","author":"S Thrun","year":"2002","unstructured":"Thrun, S.: Probabilistic robotics. Commun. ACM 45(3), 52\u201357 (2002)","journal-title":"Commun. ACM"},{"key":"2092_CR38","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly S et\u00a0al.: An image is worth 16x16 words: Transformers for image recognition at scale (2020). arXiv preprint arXiv:2010.11929"},{"key":"2092_CR39","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In.: IEEE Conference on computer vision and pattern recognition. Ieee 2009, 248\u2013255 (2009)","DOI":"10.1109\/CVPR.2009.5206848"},{"issue":"3","key":"2092_CR40","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1177\/0278364917700714","volume":"36","author":"B Calli","year":"2017","unstructured":"Calli, B., Singh, A., Bruce, J., Walsman, A., Konolige, K., Srinivasa, S., Abbeel, P., Dollar, A.M.: Yale-cmu-berkeley dataset for robotic manipulation research. The International Journal of Robotics Research 36(3), 261\u2013268 (2017)","journal-title":"The International Journal of Robotics Research"},{"key":"2092_CR41","doi-asserted-by":"crossref","unstructured":"Kirkpatrick, S., Gelatt\u00a0Jr, C.D., Vecchi, M.P.: Optimization by simulated annealing. 
Science, 220(4598), 671\u2013680 (1983)","DOI":"10.1126\/science.220.4598.671"},{"issue":"3\u20134","key":"2092_CR42","doi-asserted-by":"publisher","first-page":"537","DOI":"10.1007\/s10846-015-0189-z","volume":"80","author":"SH Kasaei","year":"2015","unstructured":"Kasaei, S.H., Oliveira, M., Lim, G.H., Lopes, L.S., Tom\u00e9, A.M.: Interactive open-ended learning for 3D object recognition: An approach and experiments. Journal of Intelligent & Robotic Systems 80(3\u20134), 537\u2013553 (2015)","journal-title":"Journal of Intelligent & Robotic Systems"},{"key":"2092_CR43","unstructured":"Keunecke, N., Kasaei, S.H.: Combining shape features with multiple color spaces in open-ended 3d object recognition. IEEE-RAS International conference on humanoid robots (Humanoids), (2020)"},{"key":"2092_CR44","doi-asserted-by":"crossref","unstructured":"Ji, R., Wen, L., Zhang, L., Du, D., Wu, Y., Zhao, C., Liu, X., Huang, F.: Attention convolutional binary neural tree for fine-grained visual categorization. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 10\u00a0468\u201310\u00a0477 (2020)","DOI":"10.1109\/CVPR42600.2020.01048"},{"issue":"4","key":"2092_CR45","doi-asserted-by":"publisher","first-page":"341","DOI":"10.1007\/s10339-011-0407-y","volume":"12","author":"A Chauhan","year":"2011","unstructured":"Chauhan, A., Lopes, L.S.: Using spoken words to guide open-ended category formation. Cogn. Process. 12(4), 341 (2011)","journal-title":"Cogn. Process."},{"key":"2092_CR46","doi-asserted-by":"crossref","unstructured":"Kasaei, S.H., Lopes, L.S., Tom\u00e9, A.M.: Coping with context change in open-ended object recognition without explicit context information. In: 2018 IEEE\/RSJ International conference on intelligent robots and systems (IROS). IEEE, pp. 
1\u20137 (2018)","DOI":"10.1109\/IROS.2018.8593922"},{"key":"2092_CR47","doi-asserted-by":"crossref","unstructured":"Lai, K., Bo, L., Ren, X., Fox, D.: A large-scale hierarchical multi-view RGB-D object dataset. In: Robotics and automation (ICRA), 2011 IEEE international conference on. IEEE, pp. 1817\u20131824 (2011)","DOI":"10.1109\/ICRA.2011.5980382"},{"key":"2092_CR48","doi-asserted-by":"publisher","first-page":"151","DOI":"10.1016\/j.neucom.2018.02.066","volume":"291","author":"SH Kasaei","year":"2018","unstructured":"Kasaei, S.H., Oliveira, M., Lim, G.H., Lopes, L.S., Tom\u00e9, A.M.: Towards lifelong assistive robotics: A tight coupling between object perception and manipulation. Neurocomputing 291, 151\u2013166 (2018)","journal-title":"Neurocomputing"},{"key":"2092_CR49","unstructured":"Hoffman, M., Bach, F.R., Blei, D.M.: Online learning for latent dirichlet allocation. In: Advances in neural information processing systems, pp. 856\u2013864 (2010)"},{"key":"2092_CR50","doi-asserted-by":"crossref","unstructured":"Kasaei, S.H., Sock, J., Lopes, L.S., Tom\u00e9, A.M., Kim, T.-K.: Perceiving, learning, and recognizing 3D objects: An approach to cognitive service robots. In: Thirty-second AAAI conference on artificial intelligence, (2018)","DOI":"10.1609\/aaai.v32i1.11319"},{"key":"2092_CR51","doi-asserted-by":"crossref","unstructured":"Gualtieri, M., Ten\u00a0Pas, A., Saenko, K., Platt, R.: High precision grasp pose detection in dense clutter. In: 2016 IEEE\/RSJ International conference on intelligent robots and systems (IROS). IEEE, pp. 598\u2013605 (2016)","DOI":"10.1109\/IROS.2016.7759114"},{"key":"2092_CR52","doi-asserted-by":"crossref","unstructured":"Morrison, D., Corke, P., Leitner, J.: Learning robust, real-time, reactive robotic grasping. 
The International Journal of Robotics Research 39(2\u20133), 183\u2013201 (2020)","DOI":"10.1177\/0278364919859066"},{"key":"2092_CR53","unstructured":"Mokhtar, K., Heemskerk, C., Kasaei, H.: Self-supervised learning for joint pushing and grasping policies in highly cluttered environments (2022). arXiv preprint arXiv:2203.02511"},{"key":"2092_CR54","doi-asserted-by":"crossref","unstructured":"Xu, Y., Kasaei, M., Kasaei, H., Li, Z.: Instance-wise grasp synthesis for robotic grasping (2023). arXiv preprint arXiv:2302.07824","DOI":"10.1109\/ICRA48891.2023.10161149"}],"container-title":["Journal of Intelligent &amp; Robotic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-024-02092-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10846-024-02092-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-024-02092-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,16]],"date-time":"2024-11-16T09:42:00Z","timestamp":1731750120000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10846-024-02092-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,16]]},"references-count":54,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,6]]}},"alternative-id":["2092"],"URL":"https:\/\/doi.org\/10.1007\/s10846-024-02092-5","relation":{},"ISSN":["1573-0409"],"issn-type":[{"value":"1573-0409","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,16]]},"assertion":[{"value":"10 February 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 March 
2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 April 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"62"}}