{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T05:38:06Z","timestamp":1775799486079,"version":"3.50.1"},"reference-count":46,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,6,3]],"date-time":"2025-06-03T00:00:00Z","timestamp":1748908800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,3]],"date-time":"2025-06-03T00:00:00Z","timestamp":1748908800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2023YFC3321600"],"award-info":[{"award-number":["2023YFC3321600"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Fundamental Research Funds for Central Universities of the South-Central Minzu University","award":["CZY23026"],"award-info":[{"award-number":["CZY23026"]}]},{"name":"Academic Innovation Teams of South-Central Minzu University","award":["XTZ24006"],"award-info":[{"award-number":["XTZ24006"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Image Video Proc."],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Frame selection is a fundamental issue in video action recognition. It aims to minimize temporal redundancy and reduce computational cost. Current frame sampling strategies either rely on US based on motion, lacking emphasis on discriminative frames, or employ complex learning models or additional modal information, compromising generalizability. To address these challenges, this paper presents an adaptive frame selection strategy. It filters redundant frames through motion information and models relationships between each frame and others, thereby predicting the significance of each frame. This strategy combines the advantages of motion prior information and supervised learning. During training, frame importance-related constraints are integrated, guiding frames selection with strong discriminative features as inputs for the action recognition network. This frame selection method is integrated with backbone network structures such as TDN, GCTDN, AIM, and tested on three action datasets, Diving-48, UCF101 and HMDB51. The improvement on action recognition achieved is 4.4% on the Diving-48 dataset, 1.9% on the UCF101 dataset and 2.3% on HMDB51 dataset. 
Experimental results demonstrate that our selection strategy can be integrated with state-of-the-art action recognition models, leading to improved recognition performance.<\/jats:p>","DOI":"10.1186\/s13640-025-00675-2","type":"journal-article","created":{"date-parts":[[2025,6,3]],"date-time":"2025-06-03T01:11:14Z","timestamp":1748913074000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Motion-driven adaptive frame selection strategy for video action recognition"],"prefix":"10.1186","volume":"2025","author":[{"given":"Hao","family":"Ding","sequence":"first","affiliation":[]},{"given":"Chen","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Jing","family":"Sun","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8712-1002","authenticated-orcid":false,"given":"Xiaoping","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Hongling","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Jianjin","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,3]]},"reference":[{"key":"675_CR1","doi-asserted-by":"publisher","first-page":"7943","DOI":"10.1109\/TMM.2022.3232034","volume":"25","author":"F Wu","year":"2023","unstructured":"F. Wu, Q. Wang, J. Bian et al., A survey on video action recognition in sports: datasets, methods and applications. IEEE T. Multimedia. 25, 7943\u20137966 (2023). https:\/\/doi.org\/10.1109\/TMM.2022.3232034","journal-title":"IEEE T. Multimedia."},{"key":"675_CR2","doi-asserted-by":"publisher","first-page":"40","DOI":"10.1186\/s13640-018-0280-z","volume":"2018","author":"N Ejaz","year":"2018","unstructured":"N. Ejaz, S. Baik, H. Majeed et al., Multi-scale contrast and relative motion-based key frame extraction. J Image Video Proc. 2018, 40 (2018). https:\/\/doi.org\/10.1186\/s13640-018-0280-z","journal-title":"J Image Video Proc."},{"issue":"9","key":"675_CR3","doi-asserted-by":"publisher","first-page":"8122","DOI":"10.1109\/TCSVT.2024.3386553","volume":"34","author":"S Zhang","year":"2024","unstructured":"S. Zhang, J. Yin, Y. Dang et al., SiT-MLP: a simple MLP with point-wise topology feature learning for skeleton-based action recognition. IEEE T. Circ. Syst. Vid. 34(9), 8122\u20138134 (2024). https:\/\/doi.org\/10.1109\/TCSVT.2024.3386553","journal-title":"IEEE T. Circ. Syst. Vid."},{"key":"675_CR4","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1186\/s13640-020-00501-x","volume":"2020","author":"Y Zhao","year":"2020","unstructured":"Y. Zhao, K.L. Man, J. Smith et al., Improved two-stream model for human action recognition. J Image Video Proc. 2020, 24 (2020). https:\/\/doi.org\/10.1186\/s13640-020-00501-x","journal-title":"J Image Video Proc."},{"key":"675_CR5","doi-asserted-by":"publisher","unstructured":"M. Kim, P. H. Seo, C. Schmid, et al., Learning correlation structures for vision transformers. In Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Seattle, WA, USA, 2024; 18941-18951. https:\/\/doi.org\/10.1109\/CVPR52733.2024.01792","DOI":"10.1109\/CVPR52733.2024.01792"},{"issue":"12","key":"675_CR6","doi-asserted-by":"publisher","first-page":"7503","DOI":"10.1109\/TCSVT.2023.3274108","volume":"33","author":"Y Zhang","year":"2023","unstructured":"Y. Zhang, J. Zhao, Z. Chen et al., A closer look at video sampling for sequential action recognition. IEEE T. Circ. syst. Vid. 33(12), 7503\u20137514 (2023). 
https:\/\/doi.org\/10.1109\/TCSVT.2023.3274108","journal-title":"IEEE T. Circ. syst. Vid."},{"key":"675_CR7","doi-asserted-by":"publisher","unstructured":"K. Simonyan, A. Zisserman, Two-stream convolutional networks for action recognition in videos. in Proc. 27th Inter. Conf. Neu. Infor. Proc. Systems, Montreal, Canada, 2014; 568\u2013576. https:\/\/doi.org\/10.48550\/arXiv.1406.2199","DOI":"10.48550\/arXiv.1406.2199"},{"key":"675_CR8","doi-asserted-by":"publisher","unstructured":"C. Luo, A. L. Yuille, Grouped spatial-temporal aggregation for efficient action recognition. in Proc. IEEE Int. Conf. Comput. Vis., Seoul, Korea, 2019; 5512\u20135521. https:\/\/doi.org\/10.1109\/ICCV.2019.00561","DOI":"10.1109\/ICCV.2019.00561"},{"key":"675_CR9","doi-asserted-by":"publisher","unstructured":"Z. Qiu, T. Yao, T. Mei, Learning spatio-temporal representation with pseudo-3D residual networks. in Proc. IEEE Int. Conf. Comput. Vis., Venice, Italy, 5533\u20135541, 2017. https:\/\/doi.org\/10.1109\/ICCV.2017.590","DOI":"10.1109\/ICCV.2017.590"},{"key":"675_CR10","doi-asserted-by":"publisher","unstructured":"Z. Wu, C. Xiong, C. Y. Ma, et al., AdaFrame: adaptive frame selection for fast video recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Los Angeles, USA, 2019; 1278\u20131287. https:\/\/doi.org\/10.1109\/CVPR.2019.00137","DOI":"10.1109\/CVPR.2019.00137"},{"key":"675_CR11","doi-asserted-by":"publisher","unstructured":"B. Korbar, D. Tran, L. Torresani, SCSampler: sampling salient clips from video for efficient action recognition. in Proc. IEEE\/CVF Int. Conf. Comput. Vis., Seoul, Korea, 2019; 6231\u20136241. https:\/\/doi.org\/10.1109\/ICCV.2019.00633","DOI":"10.1109\/ICCV.2019.00633"},{"key":"675_CR12","doi-asserted-by":"publisher","unstructured":"Y. Zhi, Z. Tong, L. Wang, et al., MGSampler: an explainable sampling strategy for video action recognition. in Proc. IEEE\/CVF Int. Conf. Comput. Vis., Virtual Only, 2021. https:\/\/doi.org\/10.48550\/arXiv.2104.09952","DOI":"10.48550\/arXiv.2104.09952"},{"key":"675_CR13","doi-asserted-by":"publisher","unstructured":"A. Kar, N. Rai, K. Sikka et al., Adascan: adaptive scan pooling in deep convolutional neural networks for human action recognition in videos. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Venice, Italy, 2017. https:\/\/doi.org\/10.1109\/CVPR.2017.604","DOI":"10.1109\/CVPR.2017.604"},{"key":"675_CR14","doi-asserted-by":"publisher","unstructured":"S. Sun, Z. Kuang, L. Sheng, et al., Optical flow guided feature: a fast and robust motion representation for video action recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Salt Lake, Utah, USA, 2018; 1390\u20131399 . https:\/\/doi.org\/10.48550\/arXiv.1711.11152","DOI":"10.48550\/arXiv.1711.11152"},{"key":"675_CR15","doi-asserted-by":"publisher","unstructured":"D. Tran, L. Bourdev, R. Fergus, et al., Learning spatiotemporal features with 3d convolutional networks. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Santiago, Chile, 2015; 4489\u20134497. https:\/\/doi.org\/10.1109\/ICCV.2015.510","DOI":"10.1109\/ICCV.2015.510"},{"key":"675_CR16","doi-asserted-by":"publisher","unstructured":"C. Feichtenhofer, X3d: expanding architectures for efficient video recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Seattle, WA, USA, 2020; 200\u2013210. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00028","DOI":"10.1109\/CVPR42600.2020.00028"},{"key":"675_CR17","doi-asserted-by":"publisher","unstructured":"D. Tran, H. Wang, L. 
Torresani, et al., A closer look at spatiotemporal convolutions for action recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Salt Lake, Utah, USA, 2018; 6450\u20136459. https:\/\/doi.org\/10.48550\/arXiv.1711.11248","DOI":"10.48550\/arXiv.1711.11248"},{"key":"675_CR18","doi-asserted-by":"publisher","unstructured":"J. Lin, C. Gan, S. Han, TSM: temporal shift module for efficient video understanding. in Proc. IEEE\/CVF Int. Conf. Comput. Vis., Seoul, Korea, 2019; 7082\u20137092. https:\/\/doi.org\/10.1109\/ICCV.2019.00718","DOI":"10.1109\/ICCV.2019.00718"},{"key":"675_CR19","doi-asserted-by":"publisher","unstructured":"L. Wang, Z. Tong, B. Ji, et al., TDN: temporal difference networks for efficient action recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Virtual Only, 2021; 1895\u20131904. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00193","DOI":"10.1109\/CVPR46437.2021.00193"},{"key":"675_CR20","doi-asserted-by":"publisher","unstructured":"W. Xiang, C. Li, B. Wang, et al., Spatiotemporal self-attention modeling with temporal patch shift for action recognition. in Proc. Eur. Conf. Comput. Vis., Tel Aviv, Israel, 2022; 627\u2013644. https:\/\/doi.org\/10.48550\/arXiv.2207.13259","DOI":"10.48550\/arXiv.2207.13259"},{"key":"675_CR21","doi-asserted-by":"publisher","unstructured":"Y. Hao, H. Zhang, C. W. Ngo, et al., Group contextualization for video recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., New Orleans, Louisiana, USA, 2022; 928\u2013938. https:\/\/doi.org\/10.48550\/arXiv.2203.09694","DOI":"10.48550\/arXiv.2203.09694"},{"key":"675_CR22","doi-asserted-by":"publisher","unstructured":"Y. H. Ng, M. Hausknecht, S. Vijayanarasimhan, et al., Beyond short snippets: deep networks for video classification. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Boston, Mass, USA, 2015; 4694\u20134702. https:\/\/doi.org\/10.1109\/CVPR.2015.7299101","DOI":"10.1109\/CVPR.2015.7299101"},{"key":"675_CR23","doi-asserted-by":"publisher","unstructured":"S. N. Gowda, M. Rohrbach, L. Sevilla-Lara, Smart frame selection for action recognition. in Proc. 35th AAAI Conf. Artif. Intell., Vancouver, Canada, 2021; 35(2), 1451\u20131459. https:\/\/doi.org\/10.48550\/arXiv.2012.10671","DOI":"10.48550\/arXiv.2012.10671"},{"key":"675_CR24","doi-asserted-by":"publisher","unstructured":"C. Szegedy, W. Liu, Y. Jia, et al., Going deeper with convolutions. in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Boston, Mass, USA, 2015; 1\u20139. https:\/\/doi.org\/10.1109\/CVPR.2015.7298594","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"675_CR25","doi-asserted-by":"publisher","unstructured":"Y. Li, Y. Li, N. Vasconcelos, Resound: towards action recognition without representation bias. in Eur. Conf. Comput. Vis., Munich, Germany, 2018; 513\u2013528. https:\/\/doi.org\/10.1007\/978-3-030-01231-1_32","DOI":"10.1007\/978-3-030-01231-1_32"},{"key":"675_CR26","doi-asserted-by":"publisher","unstructured":"K. Soomro, A. Roshan Zamir, M. Shah, UCF101: A Dataset of 101 human actions classes from videos in the wild. in CRCV-TR-12-01., 2012. https:\/\/doi.org\/10.48550\/arXiv.1212.0402","DOI":"10.48550\/arXiv.1212.0402"},{"key":"675_CR27","doi-asserted-by":"publisher","unstructured":"L. Wang, B. Huang, Z. Zhao, et al., VideoMAE V2: scaling video masked autoencoders with dual masking. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Vancouver, Canada, 2023; 14549\u201314560. 
https:\/\/doi.org\/10.48550\/arXiv.2303.16727","DOI":"10.48550\/arXiv.2303.16727"},{"key":"675_CR28","doi-asserted-by":"publisher","unstructured":"T. Yang, Y. Zhu, Y. Xie, et al., AIM: adapting image models for efficient video action recognition. in Int. Conf. Learn. Represent., Kigali, Rwanda, 2023. https:\/\/doi.org\/10.48550\/arXiv.2302.03024","DOI":"10.48550\/arXiv.2302.03024"},{"key":"675_CR29","doi-asserted-by":"publisher","unstructured":"M. Zhao, Y. Yu, X. Wang, et al., Search-Map-Search: a frame selection paradigm for action recognition. in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Vancouver, Canada, 2023; 10627\u201310636. https:\/\/doi.org\/10.48550\/arXiv.2304.10316","DOI":"10.48550\/arXiv.2304.10316"},{"key":"675_CR30","doi-asserted-by":"publisher","unstructured":"W. Dong, Z. Zhang, T. Tan, Attention-aware sampling via deep reinforcement learning for action recognition. in Proc. AAAI Conf. Artif. Intell., Honolulu, Hawaii, USA, 2019; 33, 8247\u20138254. https:\/\/doi.org\/10.1609\/aaai.v33i01.33018247","DOI":"10.1609\/aaai.v33i01.33018247"},{"issue":"11","key":"675_CR31","doi-asserted-by":"publisher","first-page":"2740","DOI":"10.1109\/TPAMI.2018.2868668","volume":"41","author":"L Wang","year":"2019","unstructured":"L. Wang, Y. Xiong, Z. Wang et al., Temporal segment networks for action recognition in videos. IEEE Trans. Pattern Anal. Mach. Intell. 41(11), 2740\u20132755 (2019). https:\/\/doi.org\/10.1109\/TPAMI.2018.2868668","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"675_CR32","unstructured":"S. Zhang, Tfcnet: temporal fully connected networks for static unbiased temporal reasoning. https:\/\/arxiv.org\/pdf\/2203.05928.pdf"},{"key":"675_CR33","doi-asserted-by":"publisher","unstructured":"G. Bertasius, H. Wang, L. Torresani, Is space-time attention all you need for video understanding?. in Int. Conf. Mach. Learn., Virtual Only, 2021. https:\/\/doi.org\/10.48550\/arXiv.2102.05095","DOI":"10.48550\/arXiv.2102.05095"},{"key":"675_CR34","doi-asserted-by":"publisher","unstructured":"R. Wang, D. Chen, Z. Wu, et al., BEVT: BERT pretraining of video transformers. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., New Orleans, Louisiana, USA, 2022; 14713\u201314723. https:\/\/doi.org\/10.48550\/arXiv.2112.01529","DOI":"10.48550\/arXiv.2112.01529"},{"key":"675_CR35","doi-asserted-by":"publisher","unstructured":"K. He, X. Zhang, S Ren, et al., Deep residual learning for image recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Las Vegas, NV, USA, 2016; 770\u2013778. https:\/\/doi.org\/10.48550\/arXiv.1512.03385","DOI":"10.48550\/arXiv.1512.03385"},{"key":"675_CR36","doi-asserted-by":"publisher","unstructured":"A. Dosovitskiy, L. Beyer, A. Kolesnikov, et al., An image is worth 16x16 words: transformers for image recognition at scale. in Int. Conf. Learn. Represent., Vienna, Austria, 2021. https:\/\/doi.org\/10.48550\/arXiv.2010.11929","DOI":"10.48550\/arXiv.2010.11929"},{"key":"675_CR37","doi-asserted-by":"publisher","unstructured":"R. Qian, T. Meng, B. Gong, et al., Spatiotemporal contrastive video representation learning. in Proc.IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Virtual Only, 2021. https:\/\/doi.org\/10.48550\/arXiv.2008.03800","DOI":"10.48550\/arXiv.2008.03800"},{"issue":"9","key":"675_CR38","doi-asserted-by":"publisher","first-page":"4839","DOI":"10.1109\/TPAMI.2021.3076522","volume":"44","author":"S Kumawat","year":"2022","unstructured":"S. Kumawat, M. Verma, Y. 
Nakashima et al., Depthwise spatio-temporal STFT convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 44(9), 4839\u20134851 (2022). https:\/\/doi.org\/10.1109\/TPAMI.2021.3076522","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"675_CR39","first-page":"10078","volume":"35","author":"Z Tong","year":"2022","unstructured":"Z. Tong, Y. Song, J. Wang et al., VideoMAE: masked autoencoders are data-efficient learners for self-supervised video pre-training. Adv. Neural Inform. Process. Syst. 35, 10078\u201310093 (2022)","journal-title":"Adv. Neural Inform. Process. Syst."},{"key":"675_CR40","first-page":"26462","volume":"35","author":"JT Pan","year":"2022","unstructured":"J.T. Pan, Z.Y. Lin, X.T. Zhu et al., St-adapter: parameter-efficient image-to-video transfer learning. Adv. Neural Inform. Process. Syst. 35, 26462\u201326477 (2022)","journal-title":"Adv. Neural Inform. Process. Syst."},{"issue":"3","key":"675_CR41","doi-asserted-by":"publisher","first-page":"3347","DOI":"10.1109\/TPAMI.2022.3173658","volume":"45","author":"M Wang","year":"2023","unstructured":"M. Wang, J. Xing, J. Su et al., Learning spatiotemporal and motion features in a unified 2D network for action recognition. IEEE T. Pattern Anal. 45(3), 3347\u20133362 (2023). https:\/\/doi.org\/10.1109\/TPAMI.2022.3173658","journal-title":"IEEE T. Pattern Anal."},{"key":"675_CR42","doi-asserted-by":"publisher","unstructured":"C. Zhang, A. Gupta, A. Zisserman, Temporal query networks for fine-grained video understanding. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Virtual Only, 2021; 4486\u20134496. https:\/\/doi.org\/10.48550\/arXiv.2104.09496","DOI":"10.48550\/arXiv.2104.09496"},{"issue":"9","key":"675_CR43","doi-asserted-by":"publisher","first-page":"10913","DOI":"10.1109\/TPAMI.2023.3268134","volume":"45","author":"S Sudhakaran","year":"2023","unstructured":"S. Sudhakaran, S. Escalera, O. Lanz, Gate-shift-fuse for video action recognition. IEEE T. Pattern Anal. 45(9), 10913\u201310928 (2023). https:\/\/doi.org\/10.1109\/TPAMI.2023.3268134","journal-title":"IEEE T. Pattern Anal."},{"key":"675_CR44","doi-asserted-by":"publisher","unstructured":"H. Kuehne, H. Jhuang, E. Garrote, et al., HMDB: a large video database for human motion recognition. In Proc. IEEE\/CVF Int. Conf. Comput. Vis., 2021; 2556-2563. https:\/\/doi.org\/10.1109\/ICCV.2011.61265","DOI":"10.1109\/ICCV.2011.61265"},{"key":"675_CR45","doi-asserted-by":"publisher","unstructured":"X. Wang, S. Zhang, Z. Qing, et al., MoLo: motion-augmented long-short contrastive learning for few-shot action recognition. in Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit., Vancouver, 2023; 18011-18021. https:\/\/doi.org\/10.1109\/CVPR52729.2023.01727","DOI":"10.1109\/CVPR52729.2023.01727"},{"key":"675_CR46","doi-asserted-by":"publisher","unstructured":"J. Carreira, A. Zisserman, Quo vadis, action recognition? A new model and the kinetics dataset. in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, 6299-6308. 
https:\/\/doi.org\/10.48550\/arXiv.1705.07750","DOI":"10.48550\/arXiv.1705.07750"}],"container-title":["EURASIP Journal on Image and Video Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13640-025-00675-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13640-025-00675-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13640-025-00675-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,3]],"date-time":"2025-06-03T01:11:19Z","timestamp":1748913079000},"score":1,"resource":{"primary":{"URL":"https:\/\/jivp-eurasipjournals.springeropen.com\/articles\/10.1186\/s13640-025-00675-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,3]]},"references-count":46,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["675"],"URL":"https:\/\/doi.org\/10.1186\/s13640-025-00675-2","relation":{"has-preprint":[{"id-type":"doi","id":"10.21203\/rs.3.rs-5495634\/v1","asserted-by":"object"}]},"ISSN":["1687-5281"],"issn-type":[{"value":"1687-5281","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,3]]},"assertion":[{"value":"22 November 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 May 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"12"}}
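A record like the one above can be re-fetched from the Crossref REST API by DOI. The following is a minimal Python sketch, not part of the record itself: it assumes the third-party requests package is installed, and the mailto address is a placeholder that polite Crossref clients supply to identify themselves.

    # Fetch the Crossref metadata record shown above by its DOI.
    # Assumes `pip install requests`; the mailto value is a placeholder.
    import requests

    DOI = "10.1186/s13640-025-00675-2"
    resp = requests.get(
        f"https://api.crossref.org/works/{DOI}",
        params={"mailto": "you@example.org"},  # placeholder contact address
        timeout=30,
    )
    resp.raise_for_status()
    msg = resp.json()["message"]  # same structure as the "message" object above

    print(msg["title"][0])         # Motion-driven adaptive frame selection strategy ...
    print(msg["reference-count"])  # 46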
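The abstract describes a two-stage strategy: a motion prior first filters redundant frames, and a supervised model then predicts per-frame importance. As a rough illustration of the motion-prior stage only (this is not the authors' implementation; the mean-absolute-difference scoring rule below is a common heuristic chosen here for concreteness):

    # Hedged sketch of motion-based frame scoring; NOT the paper's method.
    import numpy as np

    def motion_scores(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W, C) video. Returns one motion score per frame, shape (T,)."""
        diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W, C)
        scores = diffs.mean(axis=(1, 2, 3))                         # mean abs frame difference
        return np.concatenate([[scores[0]], scores])                # pad so frame 0 has a score

    def select_frames(frames: np.ndarray, k: int = 8) -> np.ndarray:
        """Keep the k highest-motion frames, preserving temporal order."""
        idx = np.argsort(motion_scores(frames))[-k:]
        return frames[np.sort(idx)]

    # Example: a random 32-frame clip of 112x112 RGB, reduced to 8 frames.
    video = np.random.randint(0, 256, (32, 112, 112, 3), dtype=np.uint8)
    print(select_frames(video, k=8).shape)  # (8, 112, 112, 3)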