{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T08:07:38Z","timestamp":1764144458086,"version":"3.46.0"},"reference-count":52,"publisher":"Springer Science and Business Media LLC","issue":"16","license":[{"start":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T00:00:00Z","timestamp":1761177600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T00:00:00Z","timestamp":1761177600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["23K18508"],"award-info":[{"award-number":["23K18508"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2025,11]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Human activity recognition (HAR) research has recently focused on multiple individuals within videos. However, conventional models are trained using supervised or semi-supervised learning, which makes their direct application to real-world videos challenging. The purpose of this study is to achieve HAR from real-world videos through completely unsupervised learning. As real-world videos, we target operating room surveillance videos with surgery ongoing. We extract visual features using two autoencoders based on Inception 3D (I3D) and spatial features measured by the L2 norm between the operating table and individuals. These individual features were clustered using a centroid-based model. 
We evaluated our method on 29 operating room videos from 6 different operating rooms, totaling 145 seconds, and achieved individual-clustering accuracies of 0.83 and 0.71 on the training and test datasets, respectively. Our method enables automatic analysis of operating room videos, contributing to more efficient and effective postoperative analysis and medical education.<\/jats:p>","DOI":"10.1007\/s10489-025-06917-0","type":"journal-article","created":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T14:14:55Z","timestamp":1761228895000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Spatio-temporal unsupervised individual clustering for operating room videos"],"prefix":"10.1007","volume":"55","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-5389-5026","authenticated-orcid":false,"given":"Koji","family":"Yokoyama","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2014-7195","authenticated-orcid":false,"given":"Goshiro","family":"Yamamoto","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0515-5226","authenticated-orcid":false,"given":"Chang","family":"Liu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0000-8489-3683","authenticated-orcid":false,"given":"Sho","family":"Mitarai","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1674-6363","authenticated-orcid":false,"given":"Kazumasa","family":"Kishimoto","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9486-9505","authenticated-orcid":false,"given":"Yukiko","family":"Mori","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1472-7203","authenticated-orcid":false,"given":"Tomohiro","family":"Kuroda","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[
2025,10,23]]},"reference":[{"key":"6917_CR1","doi-asserted-by":"crossref","unstructured":"Gavrilyuk K, Sanford R, Javan M, Snoek CGM (2020) Actor-transformers for group activity recognition. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR42600.2020.00092"},{"key":"6917_CR2","doi-asserted-by":"crossref","unstructured":"Yuan H, Ni D, Wang M (2021) Spatio-temporal dynamic inference network for group activity recognition. In: Proceedings of the IEEE\/CVF international conference on computer vision (ICCV), pp 7476\u20137485","DOI":"10.1109\/ICCV48922.2021.00738"},{"issue":"6","key":"6917_CR3","doi-asserted-by":"publisher","first-page":"6955","DOI":"10.1109\/TPAMI.2020.3034233","volume":"45","author":"R Yan","year":"2023","unstructured":"Yan R, Xie L, Tang J, Shu X, Tian Q (2023) Higcin: hierarchical graph-based cross inference network for group activity recognition. IEEE Trans Pattern Anal Mach Intell 45(6):6955\u20136968. https:\/\/doi.org\/10.1109\/TPAMI.2020.3034233","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"12","key":"6917_CR4","doi-asserted-by":"publisher","first-page":"7267","DOI":"10.1109\/TCSVT.2023.3278984","volume":"33","author":"L Kong","year":"2023","unstructured":"Kong L, Zhou W, Pei D, He Z, Huang D (2023) Group activity representation learning with long-short states predictive transformer. IEEE Trans Circuits Syst Video Technol 33(12):7267\u20137281. https:\/\/doi.org\/10.1109\/TCSVT.2023.3278984","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"6917_CR5","doi-asserted-by":"crossref","unstructured":"Costa AdS (2017) Assessment of operative times of multiple surgical specialties in a public university hospital. 
Einstein (S\u00e3o Paulo)[online] 15(2):200\u2013205","DOI":"10.1590\/s1679-45082017gs3902"},{"key":"6917_CR6","doi-asserted-by":"crossref","unstructured":"Amer MR, Lei P, Todorovic S (2014) Hirf: Hierarchical random field for collective activity recognition in videos. In: Computer Vision \u2013 ECCV 2014, pp 572\u2013585. Springer, Cham","DOI":"10.1007\/978-3-319-10599-4_37"},{"issue":"6","key":"6917_CR7","doi-asserted-by":"publisher","first-page":"1242","DOI":"10.1109\/TPAMI.2013.220","volume":"36","author":"W Choi","year":"2014","unstructured":"Choi W, Savarese S (2014) Understanding collective activities of people from videos. IEEE Trans Pattern Anal Mach Intell 36(6):1242\u20131257. https:\/\/doi.org\/10.1109\/TPAMI.2013.220","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"6917_CR8","doi-asserted-by":"publisher","unstructured":"Choi W, Shahid K, Savarese S (2011) Learning context for collective activity recognition. In: CVPR 2011, pp 3273\u20133280. https:\/\/doi.org\/10.1109\/CVPR.2011.5995707","DOI":"10.1109\/CVPR.2011.5995707"},{"key":"6917_CR9","doi-asserted-by":"crossref","unstructured":"Choi W, Savarese S (2012) A unified framework for multi-target tracking and collective activity recognition. In: Computer vision \u2013 ECCV 2012, pp 215\u2013230. Springer, Berlin, Heidelberg","DOI":"10.1007\/978-3-642-33765-9_16"},{"key":"6917_CR10","doi-asserted-by":"crossref","unstructured":"Hajimirsadeghi H, Yan W, Vahdat A, Mori G (2015) Visual recognition by counting instances: A multi-instance cardinality potential kernel. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2015.7298875"},{"key":"6917_CR11","doi-asserted-by":"crossref","unstructured":"Lan T, Sigal L, Mori G (2012) Social roles in hierarchical models for human activity recognition. 
In: 2012 IEEE Conference on computer vision and pattern recognition, pp 1354\u20131361","DOI":"10.1109\/CVPR.2012.6247821"},{"issue":"8","key":"6917_CR12","doi-asserted-by":"publisher","first-page":"1549","DOI":"10.1109\/TPAMI.2011.228","volume":"34","author":"T Lan","year":"2012","unstructured":"Lan T, Wang Y, Yang W, Robinovitch SN, Mori G (2012) Discriminative latent models for recognizing contextual group activities. IEEE Trans Pattern Anal Mach Intell 34(8):1549\u20131562. https:\/\/doi.org\/10.1109\/TPAMI.2011.228","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"6917_CR13","doi-asserted-by":"publisher","unstructured":"Choi W, Shahid K, Savarese S (2009) What are they doing? : Collective activity classification using spatio-temporal relationship among people. In: 2009 IEEE 12th International conference on computer vision workshops, ICCV Workshops, pp 1282\u20131289. https:\/\/doi.org\/10.1109\/ICCVW.2009.5457461","DOI":"10.1109\/ICCVW.2009.5457461"},{"key":"6917_CR14","doi-asserted-by":"crossref","unstructured":"Ibrahim MS, Muralidharan S, Deng Z, Vahdat A, Mori G (2016) A hierarchical deep temporal model for group activity recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2016.217"},{"key":"6917_CR15","doi-asserted-by":"crossref","unstructured":"Wang M, Ni B, Yang X (2017) Recurrent modeling of interaction context for collective activity recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2017.783"},{"key":"6917_CR16","doi-asserted-by":"publisher","unstructured":"Kong L, Qin J, Huang D, Wang Y, Van\u00a0Gool L (2018) Hierarchical attention and context modeling for group activity recognition. In: 2018 IEEE International conference on acoustics, speech and signal processing (ICASSP), pp 1328\u20131332. 
https:\/\/doi.org\/10.1109\/ICASSP.2018.8461770","DOI":"10.1109\/ICASSP.2018.8461770"},{"key":"6917_CR17","doi-asserted-by":"crossref","unstructured":"Qi M, Qin J, Li A, Wang Y, Luo J, Van\u00a0Gool L (2018) stagnet: An attentive semantic rnn for group activity recognition. In: Proceedings of the european conference on computer vision (ECCV)","DOI":"10.1007\/978-3-030-01249-6_7"},{"key":"6917_CR18","doi-asserted-by":"crossref","unstructured":"Azar SM, Atigh MG, Nickabadi A, Alahi A (2019) Convolutional relational machine for group activity recognition. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2019.00808"},{"key":"6917_CR19","doi-asserted-by":"crossref","unstructured":"Wu J, Wang L, Wang L, Guo J, Wu G (2019) Learning actor relation graphs for group activity recognition. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2019.01020"},{"key":"6917_CR20","doi-asserted-by":"crossref","unstructured":"Li Z, Chang X, Li Y, Su J (2025) Skeleton-based group activity recognition via spatial-temporal panoramic graph. In: Leonardis A, Ricci E, Roth S, Russakovsky O, Sattler T, Varol G (eds) Computer Vision \u2013 ECCV 2024, pp 252\u2013269. Springer, Cham","DOI":"10.1007\/978-3-031-73202-7_15"},{"key":"6917_CR21","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2024.108412","volume":"133","author":"X Jiang","year":"2024","unstructured":"Jiang X, Qing L, Huang J, Guo L, Peng Y (2024) Unveiling group activity recognition: leveraging local\u2013global context-aware graph reasoning for enhanced actor\u2013scene interactions. Eng Appl Artif Intell 133:108412. https:\/\/doi.org\/10.1016\/j.engappai.2024.108412","journal-title":"Eng Appl Artif Intell"},{"key":"6917_CR22","doi-asserted-by":"crossref","unstructured":"Carreira J, Zisserman A (2018) Quo Vadis, Action Recognition? 
A New Model and the Kinetics Dataset","DOI":"10.1109\/CVPR.2017.502"},{"issue":"4","key":"6917_CR23","doi-asserted-by":"publisher","first-page":"3261","DOI":"10.1609\/aaai.v35i4.16437","volume":"35","author":"H Yuan","year":"2021","unstructured":"Yuan H, Ni D (2021) Learning visual context for group activity recognition. Proc AAAI Conf Artif Intell 35(4):3261\u20133269. https:\/\/doi.org\/10.1609\/aaai.v35i4.16437","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"6917_CR24","doi-asserted-by":"crossref","unstructured":"Li S, Cao Q, Liu L, Yang K, Liu S, Hou J, Yi S (2021) Groupformer: Group activity recognition with clustered spatial-temporal transformer. In: Proceedings of the IEEE\/CVF international conference on computer vision (ICCV), pp 13668\u201313677","DOI":"10.1109\/ICCV48922.2021.01341"},{"key":"6917_CR25","doi-asserted-by":"crossref","unstructured":"Nakatani C, Kawashima H, Ukita N (2024) Learning group activity features through person attribute prediction. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 18233\u201318242","DOI":"10.1109\/CVPR52733.2024.01726"},{"key":"6917_CR26","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1016\/j.media.2016.07.001","volume":"35","author":"A Kadkhodamohammadi","year":"2017","unstructured":"Kadkhodamohammadi A, Gangi A, Mathelin M, Padoy N (2017) Articulated clinician detection using 3d pictorial structures on rgb-d data. Med Image Anal 35:215\u2013224","journal-title":"Med Image Anal"},{"key":"6917_CR27","doi-asserted-by":"crossref","unstructured":"Kadkhodamohammadi A, Gangi A, Mathelin M, Padoy N (2017) A multi-view rgb-d approach for human pose estimation in operating rooms. 
In: 2017 IEEE Winter conference on applications of computer vision (WACV), pp 363\u2013372","DOI":"10.1109\/WACV.2017.47"},{"key":"6917_CR28","unstructured":"Srivastav V, Issenhuth T, Abdolrahim K, Mathelin M, Gangi A, Padoy N (2018) Mvor: A multi-view rgb-d operating room dataset for 2d and 3d human pose estimation. In: Medical image computing and computer assisted intervention \u2013 MICCAI 2018"},{"key":"6917_CR29","doi-asserted-by":"crossref","unstructured":"Yokoyama K, Yamamoto G, Liu C, Kishimoto K, Mori Y, Kuroda T (2024) Individual activity anomaly estimation in operating rooms based on time-sequential prediction. In: MEDINFO 2023\u2014The Future Is Accessible, pp 284\u2013288","DOI":"10.3233\/SHTI230972"},{"key":"6917_CR30","doi-asserted-by":"publisher","unstructured":"Yokoyama K, Yamamoto G, Liu C, Kishimoto K, Kuroda T (2023) Operating room surveillance video analysis for group activity recognition. Adv Biomed Eng 12:171\u2013181. https:\/\/doi.org\/10.14326\/abe.12.171","DOI":"10.14326\/abe.12.171"},{"key":"6917_CR31","doi-asserted-by":"publisher","unstructured":"Yokoyama K, Yamamoto G, Liu C, Sugiyama O, Santos LH, Kuroda T (2022) Recognition of instrument passing and group attention for understanding intraoperative state of surgical team. Adv Biomed Eng 11:37\u201347. https:\/\/doi.org\/10.14326\/abe.11.37","DOI":"10.14326\/abe.11.37"},{"key":"6917_CR32","doi-asserted-by":"publisher","unstructured":"Huang P, Huang Y, Wang W, Wang L (2014) Deep embedding network for clustering. In: 2014 22nd International conference on pattern recognition, pp 1532\u20131537. https:\/\/doi.org\/10.1109\/ICPR.2014.272","DOI":"10.1109\/ICPR.2014.272"},{"key":"6917_CR33","unstructured":"Peng X, Xiao S, Feng J, Yau W-Y, Yi Z (2016) Deep subspace clustering with sparsity prior. 
In: IJCAI, pp 1925\u20131931"},{"key":"6917_CR34","doi-asserted-by":"publisher","unstructured":"McConville R, Santos-Rodr\u00edguez R, Piechocki RJ, Craddock I (2021) N2d: (not too) deep clustering via clustering the local manifold of an autoencoded embedding. In: 2020 25th International conference on pattern recognition (ICPR), pp 5145\u20135152. https:\/\/doi.org\/10.1109\/ICPR48806.2021.9413131","DOI":"10.1109\/ICPR48806.2021.9413131"},{"key":"6917_CR35","unstructured":"Xie J, Girshick R, Farhadi A (2016) Unsupervised deep embedding for clustering analysis. In: Balcan MF, Weinberger KQ (eds) Proceedings of The 33rd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol 48, pp 478\u2013487. PMLR, New York, New York, USA. https:\/\/proceedings.mlr.press\/v48\/xieb16.html"},{"key":"6917_CR36","doi-asserted-by":"publisher","first-page":"199","DOI":"10.1016\/j.neucom.2020.12.082","volume":"433","author":"J Wang","year":"2021","unstructured":"Wang J, Jiang J (2021) Unsupervised deep clustering via adaptive gmm modeling and optimization. Neurocomputing 433:199\u2013211. https:\/\/doi.org\/10.1016\/j.neucom.2020.12.082","journal-title":"Neurocomputing"},{"key":"6917_CR37","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108386","volume":"123","author":"J Cai","year":"2022","unstructured":"Cai J, Wang S, Xu C, Guo W (2022) Unsupervised deep clustering via contractive feature representation and focal loss. Pattern Recogn 123:108386. https:\/\/doi.org\/10.1016\/j.patcog.2021.108386","journal-title":"Pattern Recogn"},{"key":"6917_CR38","unstructured":"Jiang Z, Zheng Y, Tan H, Tang B, Zhou H (2016) Variational deep embedding: A generative approach to clustering. CoRR abs\/1611.05148. arXiv:1611.05148"},{"key":"6917_CR39","unstructured":"Dilokthanakul N, Mediano PAM, Garnelo M, Lee MCH, Salimbeni H, Arulkumaran K, Shanahan M (2016) Deep unsupervised clustering with gaussian mixture variational autoencoders. CoRR abs\/1611.02648. 
arXiv:1611.02648"},{"key":"6917_CR40","unstructured":"Li X, Chen Z, Poon LK, Zhang NL (2018) Learning latent superstructures in variational autoencoders for deep multidimensional clustering. arXiv preprint arXiv:1803.05206"},{"issue":"10","key":"6917_CR41","doi-asserted-by":"publisher","first-page":"10344","DOI":"10.3934\/mbe.2022484","volume":"19","author":"H Ma","year":"2022","unstructured":"Ma H (2022) Achieving deep clustering through the use of variational autoencoders and similarity-based loss. Math Biosci Eng 19(10):10344\u201310360","journal-title":"Math Biosci Eng"},{"key":"6917_CR42","unstructured":"Springenberg JT (2016) Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks"},{"key":"6917_CR43","doi-asserted-by":"crossref","unstructured":"Mukherjee S, Asnani H, Lin E, Kannan S (2019) Clustergan: Latent space clustering in generative adversarial networks","DOI":"10.1609\/aaai.v33i01.33014610"},{"issue":"9","key":"6917_CR44","doi-asserted-by":"publisher","first-page":"6263","DOI":"10.1109\/TNNLS.2021.3135375","volume":"34","author":"X Yang","year":"2023","unstructured":"Yang X, Yan J, Cheng Y, Zhang Y (2023) Learning deep generative clustering via mutual information maximization. IEEE Trans Neural Netw Learn Syst 34(9):6263\u20136275. https:\/\/doi.org\/10.1109\/TNNLS.2021.3135375","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"6917_CR45","doi-asserted-by":"crossref","unstructured":"Zhang X, Liu H, Li Q, Wu X (2019) Attributed graph clustering via adaptive graph convolution. CoRR abs\/1906.01210. arXiv:1906.01210","DOI":"10.24963\/ijcai.2019\/601"},{"key":"6917_CR46","doi-asserted-by":"publisher","unstructured":"Zhu D, Chen S, Ma X, Du R (2020) Adaptive graph convolution using heat kernel for attributed graph clustering. Appl Sci 10(4). 
https:\/\/doi.org\/10.3390\/app10041473","DOI":"10.3390\/app10041473"},{"key":"6917_CR47","doi-asserted-by":"publisher","unstructured":"Bo D, Wang X, Shi C, Zhu M, Lu E, Cui P (2020) Structural deep clustering network. In: Proceedings of the web conference 2020. WWW \u201920, pp 1400\u20131410. Association for Computing Machinery, New York, NY, USA. https:\/\/doi.org\/10.1145\/3366423.3380214","DOI":"10.1145\/3366423.3380214"},{"key":"6917_CR48","unstructured":"Hoffman MD, Johnson MJ (2016) Elbo surgery: yet another way to carve up the variational evidence lower bound"},{"key":"6917_CR49","doi-asserted-by":"crossref","unstructured":"He K, Gkioxari G, Dollar P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision (ICCV)","DOI":"10.1109\/ICCV.2017.322"},{"key":"6917_CR50","first-page":"363","volume":"2749","author":"G Farneb\u00e4ck","year":"2003","unstructured":"Farneb\u00e4ck G (2003) Two-frame motion estimation based on polynomial expansion. Image Anal 2749:363\u2013370","journal-title":"Image Anal"},{"key":"6917_CR51","unstructured":"Jocher G, Chaurasia A, Qiu J (2023) Ultralytics YOLOv8. 
https:\/\/github.com\/ultralytics\/ultralytics"},{"key":"6917_CR52","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Zitnick CL, Doll\u00e1r P (2015) Microsoft COCO: Common Objects in Context","DOI":"10.1007\/978-3-319-10602-1_48"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06917-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-025-06917-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06917-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T08:03:32Z","timestamp":1764144212000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-025-06917-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,23]]},"references-count":52,"journal-issue":{"issue":"16","published-print":{"date-parts":[[2025,11]]}},"alternative-id":["6917"],"URL":"https:\/\/doi.org\/10.1007\/s10489-025-06917-0","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2025,10,23]]},"assertion":[{"value":"14 October 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 September 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 October 2025","order":3,"name":"first_online","label":"First 
Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"This study was approved by the Ethics Committee of the Graduate School and Faculty of Medicine, Kyoto University, R3282.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical Approval"}}],"article-number":"1045"}}