{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T09:09:29Z","timestamp":1771924169193,"version":"3.50.1"},"reference-count":75,"publisher":"Elsevier BV","license":[{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.elsevier.com\/tdm\/userlicense\/1.0\/"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.elsevier.com\/legal\/tdmrep-license"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"stm-asf","delay-in-days":0,"URL":"https:\/\/doi.org\/10.15223\/policy-017"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"stm-asf","delay-in-days":0,"URL":"https:\/\/doi.org\/10.15223\/policy-037"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"stm-asf","delay-in-days":0,"URL":"https:\/\/doi.org\/10.15223\/policy-012"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"stm-asf","delay-in-days":0,"URL":"https:\/\/doi.org\/10.15223\/policy-029"},{"start":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T00:00:00Z","timestamp":1775001600000},"content-version":"stm-asf","delay-in-days":0,"URL":"https:\/\/doi.org\/10.15223\/policy-004"}],"funder":[{"DOI":"10.13039\/501100004479","name":"Jiangxi Provincial Natural Science Foundation","doi-asserted-by":"publisher","award":["20242BAB25058"],"award-info":[{"award-number":["20242BAB25058"]}],"id":[{"id":"10.13039\/501100004479","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004479","name":"Jiangxi Provincial Natural Science 
Foundation","doi-asserted-by":"publisher","award":["20242BAB25075"],"award-info":[{"award-number":["20242BAB25075"]}],"id":[{"id":"10.13039\/501100004479","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004479","name":"Jiangxi Provincial Natural Science Foundation","doi-asserted-by":"publisher","award":["20252BAC250017"],"award-info":[{"award-number":["20252BAC250017"]}],"id":[{"id":"10.13039\/501100004479","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["elsevier.com","sciencedirect.com"],"crossmark-restriction":true},"short-container-title":["Signal Processing: Image Communication"],"published-print":{"date-parts":[[2026,4]]},"DOI":"10.1016\/j.image.2026.117503","type":"journal-article","created":{"date-parts":[[2026,2,4]],"date-time":"2026-02-04T16:31:19Z","timestamp":1770222679000},"page":"117503","update-policy":"https:\/\/doi.org\/10.1016\/elsevier_cm_policy","source":"Crossref","is-referenced-by-count":0,"special_numbering":"C","title":["Transformer tracking with multi-scale extended attention"],"prefix":"10.1016","volume":"143","author":[{"given":"Yuanyun","family":"Wang","sequence":"first","affiliation":[]},{"given":"Pengcheng","family":"Sha","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6750-5105","authenticated-orcid":false,"given":"Jun","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Yan","family":"Xia","sequence":"additional","affiliation":[]}],"member":"78","reference":[{"issue":"9","key":"10.1016\/j.image.2026.117503_b1","doi-asserted-by":"crossref","first-page":"1429","DOI":"10.1109\/TMM.2015.2455418","article-title":"On-road pedestrian tracking across multiple driving recorders","volume":"17","author":"Lee","year":"2015","journal-title":"IEEE Trans. 
Multimed."},{"issue":"8","key":"10.1016\/j.image.2026.117503_b2","doi-asserted-by":"crossref","DOI":"10.3390\/electronics11081208","article-title":"Metaheuristic optimization-based path planning and tracking of quadcopter for payload hold-release mission","volume":"11","author":"Belge","year":"2022","journal-title":"Electronics"},{"key":"10.1016\/j.image.2026.117503_b3","series-title":"Methods for autonomous tracking and surveillance","author":"Kokkeby","year":"2015"},{"issue":"12","key":"10.1016\/j.image.2026.117503_b4","doi-asserted-by":"crossref","DOI":"10.3390\/biology11121732","article-title":"Artificial intelligence-based robust hybrid algorithm design and implementation for real-time detection of plant diseases in agricultural environments","volume":"11","author":"Ya\u011f","year":"2022","journal-title":"Biology"},{"key":"10.1016\/j.image.2026.117503_b5","doi-asserted-by":"crossref","first-page":"10055","DOI":"10.1109\/TMM.2024.3405654","article-title":"Visual object tracking with mutual affinity aligned to human intuition","volume":"26","author":"Zeng","year":"2024","journal-title":"IEEE Trans. 
Multimed."},{"key":"10.1016\/j.image.2026.117503_b6","doi-asserted-by":"crossref","first-page":"160","DOI":"10.1016\/j.neucom.2022.02.027","article-title":"Siamsmdfff: Siamese network tracker based on shallow-middle-deep three-level feature fusion and clustering-based adaptive rectangular window filtering","volume":"483","author":"Luo","year":"2022","journal-title":"Neurocomputing"},{"key":"10.1016\/j.image.2026.117503_b7","series-title":"European Conference on Computer Vision","first-page":"850","article-title":"Fully-convolutional siamese networks for object tracking","author":"Bertinetto","year":"2016"},{"key":"10.1016\/j.image.2026.117503_b8","doi-asserted-by":"crossref","unstructured":"Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan, Siamrpn++: Evolution of siamese visual tracking with very deep networks, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4282\u20134291.","DOI":"10.1109\/CVPR.2019.00441"},{"key":"10.1016\/j.image.2026.117503_b9","first-page":"12549","article-title":"Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines","volume":"vol. 34","author":"Xu","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b10","article-title":"Attention is all you need","volume":"30","author":"Vaswani","year":"2017","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"10.1016\/j.image.2026.117503_b11","doi-asserted-by":"crossref","unstructured":"Ning Wang, Wengang Zhou, Jie Wang, Houqiang Li, Transformer meets tracker: Exploiting temporal context for robust visual tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 
1571\u20131580.","DOI":"10.1109\/CVPR46437.2021.00162"},{"key":"10.1016\/j.image.2026.117503_b12","doi-asserted-by":"crossref","unstructured":"Christoph Mayer, Martin Danelljan, Goutam Bhat, Matthieu Paul, Danda Pani Paudel, Fisher Yu, Luc Van Gool, Transforming model prediction for tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8731\u20138740.","DOI":"10.1109\/CVPR52688.2022.00853"},{"key":"10.1016\/j.image.2026.117503_b13","doi-asserted-by":"crossref","unstructured":"Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, Huchuan Lu, Transformer tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 8126\u20138135.","DOI":"10.1109\/CVPR46437.2021.00803"},{"key":"10.1016\/j.image.2026.117503_b14","series-title":"International Conference on Machine Learning","first-page":"10347","article-title":"Training data-efficient image transformers & distillation through attention","author":"Touvron","year":"2021"},{"key":"10.1016\/j.image.2026.117503_b15","series-title":"Conditional positional encodings for vision transformers","author":"Chu","year":"2021"},{"key":"10.1016\/j.image.2026.117503_b16","series-title":"European Conference on Computer Vision","first-page":"445","article-title":"A benchmark and simulator for uav tracking","author":"Mueller","year":"2016"},{"issue":"5","key":"10.1016\/j.image.2026.117503_b17","doi-asserted-by":"crossref","first-page":"1562","DOI":"10.1109\/TPAMI.2019.2957464","article-title":"Got-10k: A large high-diversity benchmark for generic object tracking in the wild","volume":"43","author":"Huang","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"10.1016\/j.image.2026.117503_b18","doi-asserted-by":"crossref","unstructured":"Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling, Lasot: A high-quality benchmark for large-scale single object tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 5374\u20135383.","DOI":"10.1109\/CVPR.2019.00552"},{"key":"10.1016\/j.image.2026.117503_b19","doi-asserted-by":"crossref","unstructured":"Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu, Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 13763\u201313773.","DOI":"10.1109\/CVPR46437.2021.01355"},{"key":"10.1016\/j.image.2026.117503_b20","doi-asserted-by":"crossref","unstructured":"Matthias Muller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, Bernard Ghanem, Trackingnet: A large-scale dataset and benchmark for object tracking in the wild, in: Proceedings of the European Conference on Computer Vision, ECCV, 2018, pp. 300\u2013317.","DOI":"10.1007\/978-3-030-01246-5_19"},{"key":"10.1016\/j.image.2026.117503_b21","doi-asserted-by":"crossref","unstructured":"Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan, Simon Lucey, Need for speed: A benchmark for higher frame rate object tracking, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 
1125\u20131134.","DOI":"10.1109\/ICCV.2017.128"},{"key":"10.1016\/j.image.2026.117503_b22","series-title":"An image is worth 16x16 words: Transformers for image recognition at scale","author":"Dosovitskiy","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b23","doi-asserted-by":"crossref","DOI":"10.1016\/j.sigpro.2024.109511","article-title":"Pearson\u2013matthews correlation coefficients for binary and multinary classification","volume":"222","author":"Stoica","year":"2024","journal-title":"Signal Process."},{"key":"10.1016\/j.image.2026.117503_b24","first-page":"2491","article-title":"Associating objects with transformers for video object segmentation","volume":"34","author":"Yang","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"10.1016\/j.image.2026.117503_b25","doi-asserted-by":"crossref","DOI":"10.1016\/j.image.2023.116981","article-title":"Accurate and robust visual tracking using bounding box refinement and online sample filtering","volume":"116","author":"Yang","year":"2023","journal-title":"Signal Process., Image Commun."},{"key":"10.1016\/j.image.2026.117503_b26","series-title":"International Conference on Machine Learning","first-page":"4904","article-title":"Scaling up visual and vision-language representation learning with noisy text supervision","author":"Jia","year":"2021"},{"key":"10.1016\/j.image.2026.117503_b27","series-title":"International Conference on Machine Learning","first-page":"8748","article-title":"Learning transferable visual models from natural language supervision","author":"Radford","year":"2021"},{"key":"10.1016\/j.image.2026.117503_b28","first-page":"9355","article-title":"Twins: Revisiting the design of spatial attention in vision transformers","volume":"34","author":"Chu","year":"2021","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"10.1016\/j.image.2026.117503_b29","doi-asserted-by":"crossref","unstructured":"Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao, Pyramid vision transformer: A versatile backbone for dense prediction without convolutions, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp. 568\u2013578.","DOI":"10.1109\/ICCV48922.2021.00061"},{"issue":"9","key":"10.1016\/j.image.2026.117503_b30","doi-asserted-by":"crossref","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial pyramid pooling in deep convolutional networks for visual recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"10.1016\/j.image.2026.117503_b31","article-title":"Crossformer: a versatile vision transformer hinging on cross-scale attention","author":"Wang","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"10.1016\/j.image.2026.117503_b32","unstructured":"Youngwan Lee, Jonghee Kim, Jeffrey Willette, Sung Ju Hwang, Mpvit: Multi-path vision transformer for dense prediction, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7287\u20137296."},{"key":"10.1016\/j.image.2026.117503_b33","unstructured":"Sucheng Ren, Daquan Zhou, Shengfeng He, Jiashi Feng, Xinchao Wang, Shunted self-attention via multi-scale token aggregation, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 
10853\u201310862."},{"key":"10.1016\/j.image.2026.117503_b34","series-title":"European Conference on Computer Vision","first-page":"213","article-title":"End-to-end object detection with transformers","author":"Carion","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b35","series-title":"Deformable detr: Deformable transformers for end-to-end object detection","author":"Zhu","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b36","series-title":"European Conference on Computer Vision","first-page":"146","article-title":"Aiatrack: Attention in attention for transformer visual tracking","author":"Gao","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b37","doi-asserted-by":"crossref","unstructured":"Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo, Swin transformer: Hierarchical vision transformer using shifted windows, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp. 10012\u201310022.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"10.1016\/j.image.2026.117503_b38","first-page":"16743","article-title":"Swintrack: A simple and strong baseline for transformer tracking","volume":"35","author":"Lin","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"10.1016\/j.image.2026.117503_b39","series-title":"Sparsett: Visual tracking with sparse transformers","author":"Fu","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b40","doi-asserted-by":"crossref","first-page":"606","DOI":"10.1016\/j.ins.2022.12.082","article-title":"Siamese residual network for efficient visual tracking","volume":"624","author":"Fan","year":"2023","journal-title":"Inform. Sci."},{"key":"10.1016\/j.image.2026.117503_b41","series-title":"Multi-scale context aggregation by dilated convolutions","author":"Yu","year":"2015"},{"key":"10.1016\/j.image.2026.117503_b42","first-page":"2321","article-title":"Compact transformer tracker with correlative masked modeling","volume":"vol. 
37","author":"Song","year":"2023"},{"key":"10.1016\/j.image.2026.117503_b43","series-title":"European Conference on Computer Vision","first-page":"740","article-title":"Microsoft coco: Common objects in context","author":"Lin","year":"2014"},{"key":"10.1016\/j.image.2026.117503_b44","doi-asserted-by":"crossref","DOI":"10.1016\/j.patcog.2024.111278","article-title":"Adaptively bypassing vision transformer blocks for efficient visual tracking","volume":"161","author":"Yang","year":"2025","journal-title":"Pattern Recognit."},{"issue":"8","key":"10.1016\/j.image.2026.117503_b45","doi-asserted-by":"crossref","first-page":"15502","DOI":"10.1109\/TNNLS.2025.3545752","article-title":"Exploring dynamic transformer for efficient object tracking","volume":"36","author":"Zhu","year":"2025","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"10.1016\/j.image.2026.117503_b46","doi-asserted-by":"crossref","unstructured":"Ram J. Zaveri, Shivang Patel, Yu Gu, Gianfranco Doretto, Improving Accuracy and Generalization for Efficient Visual Tracking, in: IEEE\/CVF Winter Conference on Applications of Computer Vision, 2025, pp. 9468\u20139478.","DOI":"10.1109\/WACV61041.2025.00917"},{"key":"10.1016\/j.image.2026.117503_b47","series-title":"Multi-attention associate prediction network for visual tracking","author":"Sun","year":"2024"},{"key":"10.1016\/j.image.2026.117503_b48","doi-asserted-by":"crossref","DOI":"10.1016\/j.eswa.2023.121377","article-title":"Spatio-temporal mix deformable feature extractor in visual tracking","volume":"237","author":"Huang","year":"2024","journal-title":"Expert Syst. Appl."},{"key":"10.1016\/j.image.2026.117503_b49","doi-asserted-by":"crossref","first-page":"326","DOI":"10.1109\/TMM.2023.3264851","article-title":"CMAT: integrating convolution mixer and self-attention for visual tracking","volume":"26","author":"Wang","year":"2023","journal-title":"IEEE Trans. 
Multimed."},{"key":"10.1016\/j.image.2026.117503_b50","doi-asserted-by":"crossref","unstructured":"Yanyan Shao, Shuting He, Qi Ye, Yuchao Feng, Wenhan Luo, Jiming Chen, Context-Aware Integration of Language and Visual References for Natural Language Tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 19208\u201319217.","DOI":"10.1109\/CVPR52733.2024.01817"},{"key":"10.1016\/j.image.2026.117503_b51","doi-asserted-by":"crossref","unstructured":"Ben Kang, Xin Chen, Dong Wang, Houwen Peng, Huchuan Lu, Exploring lightweight hierarchical vision transformers for efficient visual tracking, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2023, pp. 9612\u20139621.","DOI":"10.1109\/ICCV51070.2023.00881"},{"key":"10.1016\/j.image.2026.117503_b52","doi-asserted-by":"crossref","DOI":"10.1016\/j.eswa.2023.119890","article-title":"A joint local\u2013global search mechanism for long-term tracking with dynamic memory network","volume":"223","author":"Gao","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"10.1016\/j.image.2026.117503_b53","doi-asserted-by":"crossref","unstructured":"Philippe Blatter, Menelaos Kanakis, Martin Danelljan, Luc Van Gool, Efficient visual tracking with exemplar transformers, in: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 1571\u20131581.","DOI":"10.1109\/WACV56688.2023.00162"},{"key":"10.1016\/j.image.2026.117503_b54","doi-asserted-by":"crossref","unstructured":"Fei Xie, Chunyu Wang, Guangting Wang, Yue Cao, Wankou Yang, Wenjun Zeng, Correlation-aware deep tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 
8751\u20138760.","DOI":"10.1109\/CVPR52688.2022.00855"},{"key":"10.1016\/j.image.2026.117503_b55","doi-asserted-by":"crossref","unstructured":"Feng Tang, Qiang Ling, Ranking-Based Siamese Visual Tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8741\u20138750.","DOI":"10.1109\/CVPR52688.2022.00854"},{"key":"10.1016\/j.image.2026.117503_b56","series-title":"Learning localization-aware target confidence for siamese visual tracking","author":"Nie","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b57","doi-asserted-by":"crossref","unstructured":"Daitao Xing, Nikolaos Evangeliou, Athanasios Tsoukalas, Anthony Tzes, Siamese transformer pyramid networks for real-time UAV tracking, in: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 2139\u20132148.","DOI":"10.1109\/WACV51458.2022.00196"},{"key":"10.1016\/j.image.2026.117503_b58","series-title":"Context-aware visual tracking with joint meta-updating","author":"Shen","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b59","series-title":"SRRT: Search region regulation tracking","author":"Zhu","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b60","series-title":"Learning target-aware representation for visual tracking via informative interactions","author":"Guo","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b61","doi-asserted-by":"crossref","unstructured":"Qiuhong Shen, Lei Qiao, Jinyang Guo, Peixia Li, Xin Li, Bo Li, Weitao Feng, Weihao Gan, Wei Wu, Wanli Ouyang, Unsupervised learning of accurate Siamese tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 
8101\u20138110.","DOI":"10.1109\/CVPR52688.2022.00793"},{"key":"10.1016\/j.image.2026.117503_b62","doi-asserted-by":"crossref","unstructured":"Zhipeng Zhang, Yihao Liu, Xiao Wang, Bing Li, Weiming Hu, Learn to match: Automatic matching network design for visual tracking, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp. 13339\u201313348.","DOI":"10.1109\/ICCV48922.2021.01309"},{"key":"10.1016\/j.image.2026.117503_b63","doi-asserted-by":"crossref","unstructured":"Zikun Zhou, Wenjie Pei, Xin Li, Hongpeng Wang, Feng Zheng, Zhenyu He, Saliency-associated object tracking, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp. 9866\u20139875.","DOI":"10.1109\/ICCV48922.2021.00972"},{"key":"10.1016\/j.image.2026.117503_b64","unstructured":"Dongyan Guo, Yanyan Shao, Ying Cui, Zhenhua Wang, Liyan Zhang, Chunhua Shen, Graph attention tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9543\u20139552."},{"key":"10.1016\/j.image.2026.117503_b65","series-title":"European Conference on Computer Vision","first-page":"205","article-title":"Know your surroundings: Exploiting scene information for object tracking","author":"Bhat","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b66","series-title":"Fully convolutional online tracking","author":"Cui","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b67","doi-asserted-by":"crossref","unstructured":"Martin Danelljan, Luc Van Gool, Radu Timofte, Probabilistic regression for visual tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 
7183\u20137192.","DOI":"10.1109\/CVPR42600.2020.00721"},{"key":"10.1016\/j.image.2026.117503_b68","series-title":"European Conference on Computer Vision","first-page":"771","article-title":"Ocean: Object-aware anchor-free tracking","author":"Zhang","year":"2020"},{"key":"10.1016\/j.image.2026.117503_b69","doi-asserted-by":"crossref","unstructured":"Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, Huchuan Lu, Learning spatio-temporal transformer for visual tracking, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp. 10448\u201310457.","DOI":"10.1109\/ICCV48922.2021.01028"},{"key":"10.1016\/j.image.2026.117503_b70","series-title":"Correlation-embedded transformer tracking: A single-branch framework","author":"Xie","year":"2024"},{"key":"10.1016\/j.image.2026.117503_b71","series-title":"European Conference on Computer Vision","first-page":"341","article-title":"Joint feature learning and relation modeling for tracking: A one-stream framework","author":"Ye","year":"2022"},{"key":"10.1016\/j.image.2026.117503_b72","doi-asserted-by":"crossref","unstructured":"Haojie Zhao, Dong Wang, Huchuan Lu, Representation learning for visual object tracking by masked appearance transfer, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 18696\u201318705.","DOI":"10.1109\/CVPR52729.2023.01793"},{"key":"10.1016\/j.image.2026.117503_b73","first-page":"8727","article-title":"Robust tracking via mamba-based context-aware token learning","volume":"vol. 39","author":"Xie","year":"2025"},{"key":"10.1016\/j.image.2026.117503_b74","doi-asserted-by":"crossref","unstructured":"Xin Chen, Houwen Peng, Dong Wang, Huchuan Lu, Han Hu, Seqtrack: Sequence to sequence learning for visual object tracking, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 
14572\u201314581.","DOI":"10.1109\/CVPR52729.2023.01400"},{"key":"10.1016\/j.image.2026.117503_b75","doi-asserted-by":"crossref","unstructured":"Yutao Cui, Cheng Jiang, Limin Wang, Gangshan Wu, Mixformer: End-to-end tracking with iterative mixed attention, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13608\u201313618.","DOI":"10.1109\/CVPR52688.2022.01324"}],"container-title":["Signal Processing: Image Communication"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/api.elsevier.com\/content\/article\/PII:S0923596526000263?httpAccept=text\/xml","content-type":"text\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/api.elsevier.com\/content\/article\/PII:S0923596526000263?httpAccept=text\/plain","content-type":"text\/plain","content-version":"vor","intended-application":"text-mining"}],"deposited":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T08:15:16Z","timestamp":1771920916000},"score":1,"resource":{"primary":{"URL":"https:\/\/linkinghub.elsevier.com\/retrieve\/pii\/S0923596526000263"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,4]]},"references-count":75,"alternative-id":["S0923596526000263"],"URL":"https:\/\/doi.org\/10.1016\/j.image.2026.117503","relation":{},"ISSN":["0923-5965"],"issn-type":[{"value":"0923-5965","type":"print"}],"subject":[],"published":{"date-parts":[[2026,4]]},"assertion":[{"value":"Elsevier","name":"publisher","label":"This article is maintained by"},{"value":"Transformer tracking with multi-scale extended attention","name":"articletitle","label":"Article Title"},{"value":"Signal Processing: Image Communication","name":"journaltitle","label":"Journal Title"},{"value":"https:\/\/doi.org\/10.1016\/j.image.2026.117503","name":"articlelink","label":"CrossRef DOI link to publisher maintained version"},{"value":"article","name":"content_type","label":"Content Type"},{"value":"\u00a9 2026 Elsevier B.V. 
All rights are reserved, including those for text and data mining, AI training, and similar technologies.","name":"copyright","label":"Copyright"}],"article-number":"117503"}}