{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T12:07:01Z","timestamp":1773403621521,"version":"3.50.1"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,10,31]],"date-time":"2023-10-31T00:00:00Z","timestamp":1698710400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,10,31]],"date-time":"2023-10-31T00:00:00Z","timestamp":1698710400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002241","name":"Japan Science and Technology Agency","doi-asserted-by":"publisher","award":["JPMJFS2123"],"award-info":[{"award-number":["JPMJFS2123"]}],"id":[{"id":"10.13039\/501100002241","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["JP20H05957"],"award-info":[{"award-number":["JP20H05957"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["JP20H00603"],"award-info":[{"award-number":["JP20H00603"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>For visual estimation of optical flow, which is crucial for various vision analyses, unsupervised learning by view synthesis has emerged as a promising alternative to supervised methods because the ground-truth flow is not readily available in many cases. 
However, unsupervised learning tends to be unstable when pixel tracking is lost through occlusion and motion blur, or when pixel correspondence is impaired by temporal variations in image content and spatial structure. Recognizing that dynamic occlusions and object variations usually exhibit smooth temporal transitions in natural settings, we shifted our focus to modeling unsupervised optical flow learning from multi-frame sequences of such dynamic scenes. Specifically, we simulated various dynamic scenarios and occlusion phenomena based on the Markov property, allowing the model to extract motion laws and thus improve performance in dynamic and occluded areas, which diverges from existing methods that do not consider temporal dynamics. In addition, we introduced a temporal dynamic model based on a well-designed spatial-temporal dual recurrent block, resulting in a lightweight model structure with fast inference speed. Assuming the temporal smoothness of optical flow, we used the prior motions of adjacent frames to supervise occluded regions more reliably. 
Experiments on several optical flow benchmarks demonstrated the effectiveness of our method, as the performance is comparable to several state-of-the-art methods with advantages in memory and computational overhead.<\/jats:p>","DOI":"10.1007\/s40747-023-01266-2","type":"journal-article","created":{"date-parts":[[2023,10,31]],"date-time":"2023-10-31T09:03:07Z","timestamp":1698742987000},"page":"2215-2231","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Unsupervised learning of optical flow in a multi-frame dynamic environment using temporal dynamic modeling"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2267-421X","authenticated-orcid":false,"given":"Zitang","family":"Sun","sequence":"first","affiliation":[]},{"given":"Zhengbo","family":"Luo","sequence":"additional","affiliation":[]},{"given":"Shin\u2019ya","family":"Nishida","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,10,31]]},"reference":[{"issue":"11","key":"1266_CR1","doi-asserted-by":"publisher","first-page":"2274","DOI":"10.1109\/TPAMI.2012.120","volume":"34","author":"R Achanta","year":"2012","unstructured":"Achanta R, Shaji A, Smith K, Lucchi A, Fua P, S\u00fcsstrunk S (2012) Slic superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell 34(11):2274\u20132282","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1266_CR2","doi-asserted-by":"crossref","unstructured":"Behl A, Hosseini\u00a0Jafari O, Karthik\u00a0Mustikovela S, Abu\u00a0Alhaija H, Rother C, Geiger A (2017) Bounding boxes, segmentations and object coordinates: How important is recognition for 3d scene flow estimation in autonomous driving scenarios? In: Proceedings of the IEEE international conference on computer vision, pp. 
2574\u20132583","DOI":"10.1109\/ICCV.2017.281"},{"key":"1266_CR3","doi-asserted-by":"crossref","unstructured":"Brox T, Bruhn A, Papenberg N, Weickert J (2004) High accuracy optical flow estimation based on a theory for warping. In: European conference on computer vision, pp. 25\u201336. Springer","DOI":"10.1007\/978-3-540-24673-2_3"},{"key":"1266_CR4","doi-asserted-by":"crossref","unstructured":"Butler DJ, Wulff J, Stanley GB, Black MJ (2012) A naturalistic open source movie for optical flow evaluation. In: A. Fitzgibbon et al. (Eds.) (ed.) European Conf. on Computer Vision (ECCV), Part IV, LNCS 7577, pp. 611\u2013625. Springer-Verlag","DOI":"10.1007\/978-3-642-33783-3_44"},{"key":"1266_CR5","doi-asserted-by":"crossref","unstructured":"Cho K, Van\u00a0Merri\u00ebnboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078","DOI":"10.3115\/v1\/D14-1179"},{"key":"1266_CR6","doi-asserted-by":"crossref","unstructured":"Dosovitskiy A, Fischer P, Ilg E, Hausser P, Hazirbas C, Golkov V, Van Der\u00a0Smagt P, Cremers D, Brox T (2015) Flownet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE international conference on computer vision, pp. 2758\u20132766","DOI":"10.1109\/ICCV.2015.316"},{"issue":"12","key":"1266_CR7","doi-asserted-by":"publisher","first-page":"2496","DOI":"10.1109\/TPAMI.2016.2646685","volume":"39","author":"O Freifeld","year":"2017","unstructured":"Freifeld O, Hauberg S, Batmanghelich K, Fisher JW (2017) Transformations based on continuous piecewise-affine velocity fields. IEEE Trans Pattern Anal Mach Intell 39(12):2496\u20132509","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1266_CR8","doi-asserted-by":"crossref","unstructured":"Geiger A, Lenz P, Urtasun R (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. 
In: Conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"1266_CR9","doi-asserted-by":"crossref","unstructured":"Godet P, Boulch A, Plyer A, Le\u00a0Besnerais G (2021) Starflow: A spatiotemporal recurrent cell for lightweight multi-frame optical flow estimation. In: 2020 25th International conference on pattern recognition (ICPR), pp. 2462\u20132469. IEEE","DOI":"10.1109\/ICPR48806.2021.9412269"},{"key":"1266_CR10","doi-asserted-by":"crossref","unstructured":"Guan S, Li H, Zheng WS (2019) Unsupervised learning for optical flow estimation using pyramid convolution lstm. In: 2019 IEEE international conference on multimedia and expo (ICME), pp. 181\u2013186. IEEE","DOI":"10.1109\/ICME.2019.00039"},{"key":"1266_CR11","doi-asserted-by":"crossref","unstructured":"Hui TW, Tang X, Loy CC (2018) Liteflownet: A lightweight convolutional neural network for optical flow estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8981\u20138989","DOI":"10.1109\/CVPR.2018.00936"},{"key":"1266_CR12","doi-asserted-by":"crossref","unstructured":"Hui TW, Tang X, Loy CC (2018) Liteflownet: A lightweight convolutional neural network for optical flow estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8981\u20138989","DOI":"10.1109\/CVPR.2018.00936"},{"key":"1266_CR13","doi-asserted-by":"crossref","unstructured":"Hur J, Roth S (2019) Iterative residual refinement for joint optical flow and occlusion estimation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 5754\u20135763","DOI":"10.1109\/CVPR.2019.00590"},{"key":"1266_CR14","doi-asserted-by":"crossref","unstructured":"Ilg E, Mayer N, Saikia T, Keuper M, Dosovitskiy A, Brox T (2017) Flownet 2.0: Evolution of optical flow estimation with deep networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
2462\u20132470","DOI":"10.1109\/CVPR.2017.179"},{"key":"1266_CR15","doi-asserted-by":"crossref","unstructured":"Im W, Kim TK, Yoon SE (2020) Unsupervised learning of optical flow with deep feature similarity. In: European conference on computer vision, pp. 172\u2013188. Springer","DOI":"10.1007\/978-3-030-58586-0_11"},{"issue":"8","key":"1266_CR16","doi-asserted-by":"publisher","first-page":"1443","DOI":"10.1109\/TIP.2008.925381","volume":"17","author":"S Ince","year":"2008","unstructured":"Ince S, Konrad J (2008) Occlusion-aware optical flow estimation. IEEE Trans Image Process 17(8):1443\u20131451","journal-title":"IEEE Trans Image Process"},{"key":"1266_CR17","doi-asserted-by":"crossref","unstructured":"Janai J, Guney F, Ranjan A, Black M, Geiger A (2018) Unsupervised learning of multi-frame optical flow with occlusions. In: Proceedings of the European conference on computer vision (ECCV), pp. 690\u2013706","DOI":"10.1007\/978-3-030-01270-0_42"},{"key":"1266_CR18","doi-asserted-by":"crossref","unstructured":"Janai J, Guney F, Ranjan A, Black M, Geiger A (2018) Unsupervised learning of multi-frame optical flow with occlusions. In: Proceedings of the European conference on computer vision (ECCV), pp. 690\u2013706","DOI":"10.1007\/978-3-030-01270-0_42"},{"key":"1266_CR19","doi-asserted-by":"crossref","unstructured":"Jiang H, Sun D, Jampani V, Yang MH, Learned-Miller E, Kautz J (2018) Super slomo: High quality estimation of multiple intermediate frames for video interpolation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9000\u20139008","DOI":"10.1109\/CVPR.2018.00938"},{"key":"1266_CR20","doi-asserted-by":"crossref","unstructured":"Jonschkowski R, Stone A, Barron JT, Gordon A, Konolige K, Angelova A (2020) What matters in unsupervised optical flow. In: European conference on computer vision, pp. 557\u2013572. 
Springer","DOI":"10.1007\/978-3-030-58536-5_33"},{"key":"1266_CR21","unstructured":"Kingma DP, Ba J (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980"},{"key":"1266_CR22","doi-asserted-by":"crossref","unstructured":"Liu L, Zhai G, Ye W, Liu Y (2019) Unsupervised learning of scene flow estimation fusing with local rigidity. In: IJCAI, pp. 876\u2013882","DOI":"10.24963\/ijcai.2019\/123"},{"key":"1266_CR23","doi-asserted-by":"crossref","unstructured":"Liu L, Zhang J, He R, Liu Y, Wang Y, Tai Y, Luo D, Wang C, Li J, Huang F (2020) Learning by analogy: Reliable supervision from transformations for unsupervised optical flow estimation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 6489\u20136498","DOI":"10.1109\/CVPR42600.2020.00652"},{"key":"1266_CR24","doi-asserted-by":"crossref","unstructured":"Liu P, King I, Lyu MR, Xu J (2019) Ddflow: Learning optical flow with unlabeled data distillation. In: Proceedings of the AAAI conference on artificial intelligence, vol.\u00a033, pp. 8770\u20138777","DOI":"10.1609\/aaai.v33i01.33018770"},{"key":"1266_CR25","doi-asserted-by":"crossref","unstructured":"Liu P, King I, Lyu MR, Xu J (2019) Ddflow: Learning optical flow with unlabeled data distillation. In: Proceedings of the AAAI conference on artificial intelligence, vol.\u00a033, pp. 8770\u20138777","DOI":"10.1609\/aaai.v33i01.33018770"},{"key":"1266_CR26","doi-asserted-by":"crossref","unstructured":"Liu P, Lyu M, King I, Xu J (2019) Selflow: Self-supervised learning of optical flow. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
4571\u20134580","DOI":"10.1109\/CVPR.2019.00470"},{"key":"1266_CR27","doi-asserted-by":"publisher","first-page":"6420","DOI":"10.1109\/TIP.2021.3093781","volume":"30","author":"S Liu","year":"2021","unstructured":"Liu S, Luo K, Ye N, Wang C, Wang J, Zeng B (2021) Oiflow: occlusion-inpainting optical flow estimation by unsupervised learning. IEEE Trans Image Process 30:6420\u20136433","journal-title":"IEEE Trans Image Process"},{"key":"1266_CR28","unstructured":"Lotter W, Kreiman G, Cox D (2016) Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104"},{"key":"1266_CR29","doi-asserted-by":"crossref","unstructured":"Luo K, Wang C, Liu S, Fan H, Wang J, Sun J (2021) Upflow: Upsampling pyramid for unsupervised optical flow learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 1045\u20131054","DOI":"10.1109\/CVPR46437.2021.00110"},{"issue":"9","key":"1266_CR30","doi-asserted-by":"publisher","first-page":"942","DOI":"10.1007\/s11263-018-1082-6","volume":"126","author":"N Mayer","year":"2018","unstructured":"Mayer N, Ilg E, Fischer P, Hazirbas C, Cremers D, Dosovitskiy A, Brox T (2018) What makes good synthetic training data for learning disparity and optical flow estimation? Int J Comput Vis 126(9):942\u2013960","journal-title":"Int J Comput Vis"},{"key":"1266_CR31","doi-asserted-by":"crossref","unstructured":"Meister S, Hur J, Roth S (2018) Unflow: Unsupervised learning of optical flow with a bidirectional census loss. In: Thirty-second AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v32i1.12276"},{"key":"1266_CR32","doi-asserted-by":"crossref","unstructured":"Menze M, Heipke C, Geiger A (2015) Joint 3d estimation of vehicles and scene flow. 
In: ISPRS workshop on image sequence analysis (ISA)","DOI":"10.5194\/isprsannals-II-3-W5-427-2015"},{"key":"1266_CR33","doi-asserted-by":"crossref","unstructured":"Neoral M, \u0160ochman J, Matas J (2018) Continual occlusion and optical flow estimation. In: Asian conference on computer vision, pp. 159\u2013174. Springer","DOI":"10.1007\/978-3-030-20870-7_10"},{"key":"1266_CR34","doi-asserted-by":"crossref","unstructured":"Ranjan A, Black MJ (2017) Optical flow estimation using a spatial pyramid network. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4161\u20134170","DOI":"10.1109\/CVPR.2017.291"},{"issue":"1","key":"1266_CR35","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1038\/4580","volume":"2","author":"RP Rao","year":"1999","unstructured":"Rao RP, Ballard DH (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 2(1):79\u201387","journal-title":"Nat Neurosci"},{"key":"1266_CR36","doi-asserted-by":"crossref","unstructured":"Revaud J, Weinzaepfel P, Harchaoui Z, Schmid C (2015) Epicflow: Edge-preserving interpolation of correspondences for optical flow. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1164\u20131172","DOI":"10.1109\/CVPR.2015.7298720"},{"key":"1266_CR37","doi-asserted-by":"crossref","unstructured":"Shi H, Zhou Y, Yang K, Yin X, Wang K (2022) Csflow: Learning optical flow via cross strip correlation for autonomous driving. In: 2022 IEEE intelligent vehicles symposium (IV), pp. 1851\u20131858. IEEE","DOI":"10.1109\/IV51971.2022.9827341"},{"key":"1266_CR38","unstructured":"Simonyan K, Zisserman A (2014) Two-stream convolutional networks for action recognition in videos. 
Advances in neural information processing systems 27"},{"key":"1266_CR39","doi-asserted-by":"crossref","unstructured":"Stone A, Maurer D, Ayvaci A, Angelova A, Jonschkowski R (2021) Smurf: Self-teaching multi-frame unsupervised raft with full-image warping. In: Proceedings of the IEEE\/CVF conference on Computer Vision and Pattern Recognition, pp. 3887\u20133896","DOI":"10.1109\/CVPR46437.2021.00388"},{"issue":"9","key":"1266_CR40","doi-asserted-by":"publisher","first-page":"1993","DOI":"10.1167\/jov.21.9.1993","volume":"21","author":"K Storrs","year":"2021","unstructured":"Storrs K, Fleming R (2021) Learning to see material from motion by predicting videos. J Vis 21(9):1993\u20131993","journal-title":"J Vis"},{"key":"1266_CR41","doi-asserted-by":"crossref","unstructured":"Sun D, Yang X, Liu MY, Kautz J (2018) Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8934\u20138943","DOI":"10.1109\/CVPR.2018.00931"},{"key":"1266_CR42","doi-asserted-by":"publisher","first-page":"133","DOI":"10.1016\/j.neucom.2023.03.012","volume":"534","author":"Z Sun","year":"2023","unstructured":"Sun Z, Luo Z, Nishida S (2023) Decoupled spatiotemporal adaptive fusion network for self-supervised motion estimation. Neurocomputing 534:133\u2013146","journal-title":"Neurocomputing"},{"key":"1266_CR43","doi-asserted-by":"crossref","unstructured":"Teed Z, Deng J (2020) Raft: Recurrent all-pairs field transforms for optical flow. In: European conference on computer vision, pp. 402\u2013419. Springer","DOI":"10.1007\/978-3-030-58536-5_24"},{"key":"1266_CR44","doi-asserted-by":"publisher","first-page":"8429","DOI":"10.1109\/TIP.2020.3013168","volume":"29","author":"L Tian","year":"2020","unstructured":"Tian L, Tu Z, Zhang D, Liu J, Li B, Yuan J (2020) Unsupervised learning of optical flow with cnn-based non-local filtering. 
IEEE Trans Image Process 29:8429\u20138442","journal-title":"IEEE Trans Image Process"},{"key":"1266_CR45","doi-asserted-by":"crossref","unstructured":"Tomasi C, Manduchi R (1998) Bilateral filtering for gray and color images. In: Sixth international conference on computer vision (IEEE Cat. No. 98CH36271), pp. 839\u2013846. IEEE","DOI":"10.1109\/ICCV.1998.710815"},{"issue":"11","key":"1266_CR46","doi-asserted-by":"publisher","first-page":"20850","DOI":"10.1109\/TITS.2022.3182858","volume":"23","author":"G Wang","year":"2022","unstructured":"Wang G, Ren S, Wang H (2022) Unsupervised learning of optical flow with non-occlusion from geometry. IEEE Trans Intell Trans Syst 23(11):20850\u201320859","journal-title":"IEEE Trans Intell Trans Syst"},{"issue":"1","key":"1266_CR47","doi-asserted-by":"publisher","first-page":"308","DOI":"10.1109\/TITS.2020.3010418","volume":"23","author":"G Wang","year":"2020","unstructured":"Wang G, Zhang C, Wang H, Wang J, Wang Y, Wang X (2020) Unsupervised learning of depth, optical flow and pose with occlusion from 3d geometry. IEEE Trans Intell Trans Syst 23(1):308\u2013320","journal-title":"IEEE Trans Intell Trans Syst"},{"key":"1266_CR48","doi-asserted-by":"crossref","unstructured":"Wang Y, Wang P, Yang Z, Luo C, Yang Y, Xu W (2019) Unos: Unified unsupervised optical-flow and stereo-depth estimation by watching videos. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 8071\u20138081","DOI":"10.1109\/CVPR.2019.00826"},{"key":"1266_CR49","doi-asserted-by":"crossref","unstructured":"Wang Y, Yang Y, Yang Z, Zhao L, Wang P, Xu W (2018) Occlusion aware unsupervised learning of optical flow. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4884\u20134893","DOI":"10.1109\/CVPR.2018.00513"},{"key":"1266_CR50","doi-asserted-by":"crossref","unstructured":"Yin Z, Shi J (2018) Geonet: Unsupervised learning of dense depth, optical flow and camera pose. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1983\u20131992","DOI":"10.1109\/CVPR.2018.00212"},{"key":"1266_CR51","doi-asserted-by":"crossref","unstructured":"Yu JJ, Harley AW, Derpanis KG (2016) Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In: European Conference on Computer Vision, pp. 3\u201310. Springer","DOI":"10.1007\/978-3-319-49409-8_1"},{"key":"1266_CR52","doi-asserted-by":"crossref","unstructured":"Yuan S, Sun X, Kim H, Yu S, Tomasi C (2022) Optical flow training under limited label budget via active learning. In: European conference on computer vision, pp. 410\u2013427. Springer","DOI":"10.1007\/978-3-031-20047-2_24"},{"key":"1266_CR53","doi-asserted-by":"crossref","unstructured":"Zabih R, Woodfill J (1994) Non-parametric local transforms for computing visual correspondence. In: Computer Vision-ECCV\u201994: Third European Conference on Computer Vision Stockholm, Sweden, May 2\u20136 1994 Proceedings, Volume II 3, pp. 151\u2013158. Springer","DOI":"10.1007\/BFb0028345"},{"key":"1266_CR54","doi-asserted-by":"crossref","unstructured":"Zhong Y, Ji P, Wang J, Dai Y, Li H (2019) Unsupervised deep epipolar flow for stationary or dynamic scenes. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 12095\u201312104","DOI":"10.1109\/CVPR.2019.01237"},{"key":"1266_CR55","doi-asserted-by":"crossref","unstructured":"Zou Y, Luo Z, Huang JB (2018) Df-net: Unsupervised joint learning of depth and flow using cross-task consistency. In: Proceedings of the European conference on computer vision (ECCV), pp. 
36\u201353","DOI":"10.1007\/978-3-030-01228-1_3"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01266-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01266-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01266-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,1]],"date-time":"2024-11-01T03:20:49Z","timestamp":1730431249000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01266-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,31]]},"references-count":55,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,4]]}},"alternative-id":["1266"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01266-2","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,31]]},"assertion":[{"value":"19 May 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 October 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 October 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all the authors, the corresponding author states that there is no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}