{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T19:51:47Z","timestamp":1758311507065,"version":"3.44.0"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"12","license":[{"start":{"date-parts":[[2025,7,19]],"date-time":"2025-07-19T00:00:00Z","timestamp":1752883200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,19]],"date-time":"2025-07-19T00:00:00Z","timestamp":1752883200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001554","name":"Massey University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100001554","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2025,8]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Object detectors perform unseen class detection by fine-tuning frozen visual-language models with visual prompts and natural language supervision. However, current visual prompt tuning methods struggle to learn categorywise shared knowledge when using only a single visual prompt, due to the inaccessibility of unseen classes (referred to as novel classes) during training. This leads to isolation between the unknown novel classes and the base classes. Inspired by the recently developed RPN-based open-vocabulary object detection (OVOD) methods, we propose a region-aware visual prompt selection (RVPS) module to adaptively combine region features with best-matched visual prompts based on decoupled proxy embeddings. 
Additionally, we introduce a category-aware patchwise maximal aggregation (CPMA) module to explore the relationships among visual patches with respect to the category-specific maximum activation patches contained within the target region. We evaluate the proposed approach on two open-vocabulary benchmarks: COCO and LVIS. Compared with other state-of-the-art approaches, our method achieves a 1.2% AP<jats:inline-formula>\n              <jats:tex-math>$$_{50}$$<\/jats:tex-math>\n            <\/jats:inline-formula> improvement on COCO for novel classes and a 0.5% mask AP improvement on LVIS for rare categories.<\/jats:p>","DOI":"10.1007\/s10489-025-06651-7","type":"journal-article","created":{"date-parts":[[2025,7,19]],"date-time":"2025-07-19T10:58:44Z","timestamp":1752922724000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Open-vocabulary object detection with regionwise prompt selection and patch-based category-aware maximal activation"],"prefix":"10.1007","volume":"55","author":[{"given":"Zhaocheng","family":"Xu","sequence":"first","affiliation":[]},{"given":"Ruili","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Yan","family":"Tian","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Yang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,19]]},"reference":[{"key":"6651_CR1","doi-asserted-by":"crossref","unstructured":"Hou X, Liu M, Zhang S, Wei P, Chen B, Lan X (2025) Relation detr: exploring explicit position relation prior for object detection. In: European conference on computer vision. 
pp 89\u2013105","DOI":"10.1007\/978-3-031-72973-7_6"},{"key":"6651_CR2","doi-asserted-by":"publisher","first-page":"120576","DOI":"10.1016\/j.ins.2024.120576","volume":"670","author":"X Tang","year":"2024","unstructured":"Tang X, Xu W, Li K, Han M, Ma Z, Wang R (2024) Piaenet: pyramid integration and attention enhanced network for object detection. Inf Sci 670:120576","journal-title":"Inf Sci"},{"key":"6651_CR3","doi-asserted-by":"crossref","unstructured":"Hollard L, Mohimont L, Gaveau N, Steffenel L-A (2024) Leyolo, new scalable and efficient cnn architecture for object detection. arXiv:2406.14239","DOI":"10.21428\/d82e957c.aed2cb06"},{"key":"6651_CR4","doi-asserted-by":"publisher","first-page":"126060","DOI":"10.1016\/j.eswa.2024.126060","volume":"266","author":"M Wang","year":"2025","unstructured":"Wang M, Heidari AA, Chen L, Wang R, Liu M, Shao L, Chen H (2025) Adaptive density-based clustering for many objective similarity or redundancy evolutionary optimization. Expert Syst Appl 266:126060","journal-title":"Expert Syst Appl"},{"key":"6651_CR5","doi-asserted-by":"publisher","first-page":"121503","DOI":"10.1016\/j.ins.2024.121503","volume":"689","author":"M Zong","year":"2025","unstructured":"Zong M, Ma Z, Zhu F, Ma Y, Wang R (2025) Laplacian eigenmaps based manifold regularized cnn for visual recognition. Inf Sci 689:121503","journal-title":"Inf Sci"},{"issue":"1","key":"6651_CR6","doi-asserted-by":"publisher","first-page":"947","DOI":"10.1007\/s10489-023-05245-5","volume":"54","author":"E Sun","year":"2024","unstructured":"Sun E, Zhou D, Tian Y, Xu Z, Wang X (2024) Transformer-based few-shot object detection in traffic scenarios. 
Appl Intell 54(1):947\u2013958","journal-title":"Appl Intell"},{"key":"6651_CR7","doi-asserted-by":"publisher","first-page":"30","DOI":"10.1016\/j.aiopen.2024.01.004","volume":"5","author":"Y Yao","year":"2024","unstructured":"Yao Y, Zhang A, Zhang Z, Liu Z, Chua T-S, Sun M (2024) Cpt: Colorful prompt tuning for pre-trained vision-language models. AI Open 5:30\u201338","journal-title":"AI Open"},{"key":"6651_CR8","doi-asserted-by":"crossref","unstructured":"Wu S, Zhang W, Jin S, Liu W, Loy CC (2023) Aligning bag of regions for open-vocabulary object detection. arXiv:2302.13996","DOI":"10.1109\/CVPR52729.2023.01464"},{"key":"6651_CR9","doi-asserted-by":"publisher","first-page":"111247","DOI":"10.1016\/j.patcog.2024.111247","volume":"161","author":"G Zhang","year":"2025","unstructured":"Zhang G, Chen Y, Zheng Y, Martin G, Wang R (2025) Local-enhanced representation for text-based person search. Pattern Recogn 161:111247","journal-title":"Pattern Recogn"},{"key":"6651_CR10","doi-asserted-by":"crossref","unstructured":"Wang Z, Zhou W, Xu J, Peng Y (2024) Sia-ovd: Shape-invariant adapter for bridging the image-region gap in open-vocabulary detection. In: Proceedings of the 32nd ACM international conference on multimedia. pp 4986\u20134994","DOI":"10.1145\/3664647.3680642"},{"key":"6651_CR11","doi-asserted-by":"crossref","unstructured":"Zhao X, Liu X, Wang D, Gao Y, Liu Z (2024) Scene-adaptive and region-aware multi-modal prompt for open vocabulary object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 16741\u201316750","DOI":"10.1109\/CVPR52733.2024.01584"},{"key":"6651_CR12","doi-asserted-by":"crossref","unstructured":"Wu X, Zhu F, Zhao R, Li H (2023) Cora: adapting clip for open-vocabulary detection with region prompting and anchor pre-matching. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 
pp 7031\u20137040","DOI":"10.1109\/CVPR52729.2023.00679"},{"key":"6651_CR13","doi-asserted-by":"crossref","unstructured":"Wang T (2023) Learning to detect and segment for open vocabulary object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 7051\u20137060","DOI":"10.1109\/CVPR52729.2023.00681"},{"key":"6651_CR14","doi-asserted-by":"crossref","unstructured":"Zareian A, Dela Rosa K, Hu DH, Chang S-F (2021) Open-vocabulary object detection using captions. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 14393\u201314402","DOI":"10.1109\/CVPR46437.2021.01416"},{"key":"6651_CR15","unstructured":"Kuo W, Cui Y, Gu X, Piergiovanni AJ, Angelova A (2022) F-vlm: open-vocabulary object detection upon frozen vision and language models. arXiv:2209.15639"},{"key":"6651_CR16","unstructured":"Gu X, Lin T-Y, Kuo W, Cui Y (2021) Open-vocabulary object detection via vision and language knowledge distillation. arXiv:2104.13921"},{"key":"6651_CR17","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1016\/j.ins.2018.12.047","volume":"483","author":"Z Chen","year":"2019","unstructured":"Chen Z, Wang R, Zhang Z, Wang H, Xu L (2019) Background-foreground interaction for moving object detection in dynamic scenes. Inf Sci 483:65\u201381","journal-title":"Inf Sci"},{"key":"6651_CR18","doi-asserted-by":"crossref","unstructured":"Wang Y, Wang R, Fan X, Wang T, He X (2023) Pixels, regions, and objects: multiple enhancement for salient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 10031\u201310040","DOI":"10.1109\/CVPR52729.2023.00967"},{"key":"6651_CR19","doi-asserted-by":"crossref","unstructured":"Ma Y, Zhou B, Wang R, Wang P (2023) Multi-stage factorized spatio-temporal representation for rgb-d action and gesture recognition. In: Proceedings of the 31st ACM international conference on multimedia. 
pp 3149\u20133160","DOI":"10.1145\/3581783.3612301"},{"key":"6651_CR20","unstructured":"Lin C, Sun P, Jiang Y, Luo P, Qu L, Haffari G, Yuan Z, Cai J (2022) Learning object-language alignments for open-vocabulary object detection. arXiv:2211.14843"},{"key":"6651_CR21","doi-asserted-by":"crossref","unstructured":"Du Y, Wei F, Zhang Z, Shi M, Gao Y, Li G (2022) Learning to prompt for open-vocabulary object detection with vision-language model. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 14084\u201314093","DOI":"10.1109\/CVPR52688.2022.01369"},{"issue":"22","key":"6651_CR22","doi-asserted-by":"publisher","first-page":"20285","DOI":"10.1007\/s00521-022-07578-7","volume":"34","author":"D Liu","year":"2022","unstructured":"Liu D, Tian Y, Xu Z, Jian G (2022) Handling occlusion in prohibited item detection from x-ray images. Neural Comput Appl 34(22):20285\u201320298","journal-title":"Neural Comput Appl"},{"key":"6651_CR23","doi-asserted-by":"publisher","first-page":"109905","DOI":"10.1016\/j.patcog.2023.109905","volume":"145","author":"Y Ma","year":"2024","unstructured":"Ma Y, Wang R (2024) Relative-position embedding based spatially and temporally decoupled transformer for action recognition. Pattern Recogn 145:109905","journal-title":"Pattern Recogn"},{"key":"6651_CR24","doi-asserted-by":"crossref","unstructured":"Zhou K, Yang J, Loy CC, Liu Z (2022) Conditional prompt learning for vision-language models. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 16816\u201316825","DOI":"10.1109\/CVPR52688.2022.01631"},{"key":"6651_CR25","doi-asserted-by":"crossref","unstructured":"Li J, Zhang J, Li J, Li G, Liu S, Lin L, Li G (2024) Learning background prompts to discover implicit knowledge for open vocabulary object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 
pp 16678\u201316687","DOI":"10.1109\/CVPR52733.2024.01578"},{"key":"6651_CR26","unstructured":"Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, et\u00a0al. (2021) Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp 8748\u20138763"},{"issue":"12","key":"6651_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/JBHI.2023.3319361","volume":"27","author":"Y Tian","year":"2023","unstructured":"Tian Y, Jian G, Wang J, Chen H, Pan L, Xu Z, Li J, Wang R (2023) A revised approach to orthodontic treatment monitoring from oralscan video. IEEE J Biomed Health Inform 27(12):1\u201310","journal-title":"IEEE J Biomed Health Inform"},{"key":"6651_CR28","doi-asserted-by":"crossref","unstructured":"Jing C, Potgieter J, Noble F, Wang R (2017) A comparison and analysis of rgb-d cameras\u2019 depth performance for robotics application. In: 2017 24th International Conference on Mechatronics and Machine Vision in Practice (M2VIP). pp 1\u20136","DOI":"10.1109\/M2VIP.2017.8211432"},{"issue":"9","key":"6651_CR29","doi-asserted-by":"publisher","first-page":"2337","DOI":"10.1007\/s11263-022-01653-1","volume":"130","author":"K Zhou","year":"2022","unstructured":"Zhou K, Yang J, Loy CC, Liu Z (2022) Learning to prompt for vision-language models. Int J Comput Vision 130(9):2337\u20132348","journal-title":"Int J Comput Vision"},{"issue":"2","key":"6651_CR30","doi-asserted-by":"publisher","first-page":"2280","DOI":"10.1007\/s10489-022-03396-5","volume":"53","author":"S-X Zhang","year":"2023","unstructured":"Zhang S-X, Zhu X, Hou J-B, Yin X-C (2023) Graph fusion network for multi-oriented object detection. 
Appl Intell 53(2):2280\u20132294","journal-title":"Appl Intell"},{"issue":"30","key":"6651_CR31","doi-asserted-by":"crossref","first-page":"22071","DOI":"10.1007\/s00521-022-08016-4","volume":"35","author":"Y Ma","year":"2023","unstructured":"Ma Y, Ding W, Wang R, Gao Z, Cheng G, He L, Zhao X, Tian Y, Xu Z (2023) Survey on deep learning in multimodal medical imaging for cancer detection. Neural Comput Appl 35(30):22071\u201322085","journal-title":"Neural Comput Appl"},{"issue":"1","key":"6651_CR32","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1109\/TII.2022.3148289","volume":"19","author":"Z Chen","year":"2022","unstructured":"Chen Z, Tian S, Shi X, Lu H (2022) Multiscale shared learning for fault diagnosis of rotating machinery in transportation infrastructures. IEEE Trans Industr Inf 19(1):447\u2013458","journal-title":"IEEE Trans Industr Inf"},{"key":"6651_CR33","doi-asserted-by":"crossref","unstructured":"Jia M, Tang L, Chen B-C, Cardie C, Belongie S, Hariharan B, Lim S-N (2022) Visual prompt tuning. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XXXIII. pp 709\u2013727","DOI":"10.1007\/978-3-031-19827-4_41"},{"key":"6651_CR34","unstructured":"Bahng H, Jahanian A, Sankaranarayanan S, Isola P (2022) Exploring visual prompts for adapting large-scale models. 1(3):4. arXiv:2203.17274"},{"issue":"12","key":"6651_CR35","first-page":"1","volume":"66","author":"Y Tian","year":"2023","unstructured":"Tian Y, Fu H, Wang H, Liu Y, Xu Z, Chen H, Li J, Wang R (2023) Rgb oralscan video-based orthodontic treatment monitoring. Sci China Inf Sci 66(12):1\u201310","journal-title":"Sci China Inf Sci"},{"key":"6651_CR36","doi-asserted-by":"crossref","unstructured":"Sun E, Zhou D, Tian Y, Xu Z, Wang X (2023) Transformer-based few-shot object detection in traffic scenarios. 
Appl Intell 1\u201312","DOI":"10.1007\/s10489-023-05245-5"},{"key":"6651_CR37","doi-asserted-by":"crossref","unstructured":"Zhao S, Zhang Z, Schulter S, Zhao L, Vijay\u00a0Kumar BG, Stathopoulos A, Chandraker M, Metaxas DN (2022) Exploiting unlabeled data with vision and language models for object detection. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part IX. pp 159\u2013175","DOI":"10.1007\/978-3-031-20077-9_10"},{"key":"6651_CR38","doi-asserted-by":"crossref","unstructured":"Feng C, Zhong Y, Jie Z, Chu X, Ren H, Wei X, Xie W, Ma L (2022) Promptdet: Towards open-vocabulary detection using uncurated images. In: European conference on computer vision. pp 701\u2013717","DOI":"10.1007\/978-3-031-20077-9_41"},{"issue":"4","key":"6651_CR39","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3504033","volume":"18","author":"Y Tian","year":"2022","unstructured":"Tian Y, Zhang Y, Chen W-G, Liu D, Wang H, Xu H, Han J, Ge Y (2022) 3d tooth instance segmentation learning objectness and affinity in point cloud. ACM Trans Multimed Comput Commun Appl 18(4):1\u201316","journal-title":"ACM Trans Multimed Comput Commun Appl"},{"key":"6651_CR40","doi-asserted-by":"crossref","unstructured":"Sa L, Yu C, Hong Z, Zheng T, Liu S (2023) A broader study of cross-domain few-shot object detection. Appl Intell 1\u201321","DOI":"10.1007\/s10489-023-05082-6"},{"key":"6651_CR41","doi-asserted-by":"crossref","unstructured":"Wang L, Liu Y, Du P, Ding Z, Liao Y, Qi Q, Chen B, Liu S (2023) Object-aware distillation pyramid for open-vocabulary object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 11186\u201311196","DOI":"10.1109\/CVPR52729.2023.01076"},{"key":"6651_CR42","doi-asserted-by":"crossref","unstructured":"Kim D, Angelova A, Kuo W (2023) Region-aware pretraining for open-vocabulary object detection with vision transformers. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 11144\u201311154","DOI":"10.1109\/CVPR52729.2023.01072"},{"key":"6651_CR43","doi-asserted-by":"crossref","unstructured":"Zang Y, Li W, Zhou K, Huang C, Loy CC (2022) Open-vocabulary detr with conditional matching. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part IX. pp 106\u2013122","DOI":"10.1007\/978-3-031-20077-9_7"},{"key":"6651_CR44","doi-asserted-by":"crossref","unstructured":"Bulat A, Guerrero R, Martinez B, Tzimiropoulos G (2022) Fs-detr: few-shot detection transformer with prompting and without re-training. arXiv:2210.04845","DOI":"10.1109\/ICCV51070.2023.01083"},{"key":"6651_CR45","unstructured":"Song H, Bang J (2023) Prompt-guided transformers for end-to-end open-vocabulary object detection. arXiv:2303.14386"},{"key":"6651_CR46","doi-asserted-by":"crossref","unstructured":"Zhong Y, Yang J, Zhang P, Li C, Codella N, Li LH, Zhou L, Dai X, Yuan L, Li Y, et\u00a0al (2022) Regionclip: region-based language-image pretraining. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp 16793\u201316803","DOI":"10.1109\/CVPR52688.2022.01629"},{"key":"6651_CR47","first-page":"33781","volume":"35","author":"H Bangalath","year":"2022","unstructured":"Bangalath H, Maaz M, Khattak MU, Khan SH, Khan FS (2022) Bridging the gap between object and image-level representations for open-vocabulary detection. Adv Neural Inf Process Syst 35:33781\u201333794","journal-title":"Adv Neural Inf Process Syst"},{"key":"6651_CR48","doi-asserted-by":"crossref","unstructured":"Chowdhury PN, Bhunia AK, Sain A, Koley S, Xiang T, Song Y-Z (2023) What can human sketches do for object detection? In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 
pp 15083\u201315094","DOI":"10.1109\/CVPR52729.2023.01448"},{"issue":"20","key":"6651_CR49","doi-asserted-by":"publisher","first-page":"17371","DOI":"10.1007\/s00521-022-07379-y","volume":"34","author":"D Liu","year":"2022","unstructured":"Liu D, Tian Y, Zhang Y, Gelernter J, Wang X (2022) Heterogeneous data fusion and loss function design for tooth point cloud segmentation. Neural Comput Appl 34(20):17371\u201317380","journal-title":"Neural Comput Appl"},{"issue":"11","key":"6651_CR50","doi-asserted-by":"publisher","first-page":"14426","DOI":"10.1007\/s10489-022-04108-9","volume":"53","author":"Z Xiong","year":"2023","unstructured":"Xiong Z, Song T, He S, Yao Z, Wu X (2023) A unified and costless approach for improving small and long-tail object detection in aerial images of traffic scenarios. Appl Intell 53(11):14426\u201314447","journal-title":"Appl Intell"},{"key":"6651_CR51","unstructured":"Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et\u00a0al. (2019) Pytorch: an imperative style, high-performance deep learning library. In: Proceedings of the advances in neural information processing systems. pp 8026\u20138037"},{"key":"6651_CR52","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL (2014) Microsoft coco: common objects in context. In: Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. Springer, pp 740\u2013755","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"6651_CR53","doi-asserted-by":"crossref","unstructured":"Gupta A, Dollar P, Girshick R (2019) Lvis: a dataset for large vocabulary instance segmentation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. 
pp 5356\u20135364","DOI":"10.1109\/CVPR.2019.00550"},{"key":"6651_CR54","doi-asserted-by":"crossref","unstructured":"Gao P, Geng S, Zhang R, Ma T, Fang R, Zhang Y, Li H, Qiao Y (2023) Clip-adapter: better vision-language models with feature adapters. Int J Comput Vision 1\u201315","DOI":"10.1007\/s11263-023-01891-x"},{"key":"6651_CR55","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"6651_CR56","doi-asserted-by":"crossref","unstructured":"Zhou X, Girdhar R, Joulin A, Kr\u00e4henb\u00fchl P, Misra I (2022) Detecting twenty-thousand classes using image-level supervision. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part IX. pp 350\u2013368","DOI":"10.1007\/978-3-031-20077-9_21"}],"container-title":["Applied 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06651-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-025-06651-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-025-06651-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T15:56:26Z","timestamp":1758297386000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-025-06651-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,19]]},"references-count":56,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2025,8]]}},"alternative-id":["6651"],"URL":"https:\/\/doi.org\/10.1007\/s10489-025-06651-7","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2025,7,19]]},"assertion":[{"value":"13 May 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 July 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The research does not involve human participants and\/or animals. 
Fully informed consent has already been obtained for the data used.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics and informed consent for data used"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"880"}}