{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,29]],"date-time":"2026-01-29T19:26:34Z","timestamp":1769714794174,"version":"3.49.0"},"reference-count":60,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2023,3,23]],"date-time":"2023-03-23T00:00:00Z","timestamp":1679529600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62171381"],"award-info":[{"award-number":["62171381"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62201445"],"award-info":[{"award-number":["62201445"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["CX2021080"],"award-info":[{"award-number":["CX2021080"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University","award":["62171381"],"award-info":[{"award-number":["62171381"]}]},{"name":"Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University","award":["62201445"],"award-info":[{"award-number":["62201445"]}]},{"name":"Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University","award":["CX2021080"],"award-info":[{"award-number":["CX2021080"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Hyperspectral videos (HSVs) can record more adequate detail clues than other videos, which is especially beneficial 
in cases of abundant spectral information. Although traditional methods based on correlation filters (CFs), which explore spectral information locally, achieve promising results, their performance is limited because they ignore global information. In this paper, a joint spectral\u2013spatial information method, named the spectral\u2013spatial transformer-based feature fusion tracker (SSTFT), is proposed for hyperspectral video tracking; it is capable of utilizing spectral\u2013spatial features and considering global interactions. Specifically, the feature extraction module employs two parallel branches to extract multi-level coarse-grained and fine-grained spectral\u2013spatial features, which are fused with adaptive weights. The extracted features are further fused by the context fusion module, based on a transformer with hyperspectral self-attention (HSA) and hyperspectral cross-attention (HCA), which are designed to capture self-context and cross-context feature interactions, respectively. Furthermore, an adaptive dynamic template updating strategy is used to update the template bounding box based on the prediction score. 
Extensive experimental results on benchmark hyperspectral video tracking datasets demonstrate that the proposed SSTFT outperforms state-of-the-art methods in both precision and speed.<\/jats:p>","DOI":"10.3390\/rs15071735","type":"journal-article","created":{"date-parts":[[2023,3,24]],"date-time":"2023-03-24T02:34:54Z","timestamp":1679625294000},"page":"1735","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":23,"title":["A Spectral\u2013Spatial Transformer Fusion Method for Hyperspectral Video Tracking"],"prefix":"10.3390","volume":"15","author":[{"given":"Ye","family":"Wang","sequence":"first","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"given":"Yuheng","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"given":"Mingyang","family":"Ma","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8018-596X","authenticated-orcid":false,"given":"Shaohui","family":"Mei","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,3,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Chen, X., Yan, B., Zhu, J., Wang, D., Yang, X., and Lu, H. (2021, January 19\u201325). Transformer Tracking. 
Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online.","DOI":"10.1109\/CVPR46437.2021.00803"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1016\/j.patcog.2017.11.007","article-title":"Deep visual tracking: Review and experimental comparison","volume":"76","author":"Li","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_3","first-page":"44","article-title":"A survey on moving object detection and tracking in video surveillance system","volume":"2","author":"Joshi","year":"2012","journal-title":"Int. J. Soft Comput. Eng."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"3153","DOI":"10.1007\/s11192-021-03868-4","article-title":"Tracking developments in artificial intelligence research: Constructing and applying a new search strategy","volume":"126","author":"Liu","year":"2021","journal-title":"Scientometrics"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"149","DOI":"10.1109\/TITS.2018.2804894","article-title":"Multi-vehicle tracking using microscopic traffic models","volume":"20","author":"Song","year":"2018","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_6","first-page":"1","article-title":"A survey of appearance models in visual object tracking","volume":"4","author":"Li","year":"2013","journal-title":"ACM Trans. Intell. Syst. Technol. TIST"},{"key":"ref_7","first-page":"5511812","article-title":"Semi-Supervised Locality Preserving Dense Graph Neural Network with ARMA Filters and Context-Aware Learning for Hyperspectral Image Classification","volume":"60","author":"Ding","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_8","first-page":"5536716","article-title":"Unsupervised Self-Correlated Learning Smoothy Enhanced Locality Preserving Graph Convolution Embedding Clustering for Hyperspectral Images","volume":"60","author":"Ding","year":"2022","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_9","first-page":"5536016","article-title":"Self-Supervised Locality Preserving Low-Pass Graph Convolutional Embedding for Large-Scale Hyperspectral Image Clustering","volume":"60","author":"Ding","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"201","DOI":"10.1016\/j.ins.2022.04.006","article-title":"AF2GNN: Graph convolution with adaptive filters and aggregator fusion for hyperspectral image classification","volume":"602","author":"Ding","year":"2022","journal-title":"Inf. Sci."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"246","DOI":"10.1016\/j.neucom.2022.06.031","article-title":"Multi-feature fusion: Graph neural network and CNN combining for hyperspectral image classification","volume":"501","author":"Ding","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"119508","DOI":"10.1016\/j.eswa.2023.119508","article-title":"Multireceptive field: An adaptive path aggregation graph neural framework for hyperspectral image classification","volume":"217","author":"Zhang","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Chen, L., Zhao, Y., Yao, J., Chen, J., Li, N., Chan, J.C.W., and Kong, S.G. (2021). Object Tracking in Hyperspectral-Oriented Video with Fast Spatial-Spectral Features. Remote Sens., 13.","DOI":"10.3390\/rs13101922"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"449","DOI":"10.1109\/TGRS.2018.2856370","article-title":"Tracking in aerial hyperspectral videos using deep kernelized correlation filters","volume":"57","author":"Uzkent","year":"2019","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"3719","DOI":"10.1109\/TIP.2020.2965302","article-title":"Material Based Object Tracking in Hyperspectral Videos","volume":"29","author":"Xiong","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Valmadre, J., Bertinetto, L., Henriques, J., Vedaldi, A., and Torr, P.H. (2017, January 21\u201326). End-to-end representation learning for correlation filter based tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.531"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"671","DOI":"10.1007\/s11263-017-1061-3","article-title":"Discriminative correlation filter tracker with channel and spatial reliability","volume":"126","author":"Matas","year":"2018","journal-title":"Int. J. Comput. Vis."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Sun, C., Wang, D., Lu, H., and Yang, M.H. (2018, January 18\u201323). Correlation tracking via joint discrimination and reliability learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00058"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Liu, Z., Wang, X., Shu, M., Li, G., Sun, C., Liu, Z., and Zhong, Y. (2021, January 14\u201316). An anchor-free Siamese target tracking network for hyperspectral video. Proceedings of the 2021 11th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands.","DOI":"10.1109\/WHISPERS52202.2021.9483958"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Lei, J., Liu, P., Xie, W., Gao, L., Li, Y., and Du, Q. (2022). Spatial-Spectral Cross-Correlation Embedded Dual-Transfer Network for Object Tracking Using Hyperspectral Videos. 
Remote Sens., 14.","DOI":"10.3390\/rs14153512"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Zhu, X., Zhao, D., Arun, P.V., Zhou, H., Qian, K., and Hu, J. (2022). Hyperspectral Video Target Tracking Based on Deep Features with Spectral Matching Reduction and Adaptive Scale 3D Hog Features. Remote Sens., 14.","DOI":"10.3390\/rs14235958"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"5542515","DOI":"10.1109\/TGRS.2022.3215816","article-title":"TFTN: A Transformer-Based Fusion Tracking Framework of Hyperspectral and RGB","volume":"60","author":"Zhao","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"7116","DOI":"10.1109\/TIP.2022.3216995","article-title":"SiamHYPER: Learning a Hyperspectral Object Tracker From an RGB-Based Tracker","volume":"31","author":"Liu","year":"2022","journal-title":"IEEE Trans. Image Process."},{"key":"ref_24","first-page":"5513814","article-title":"Unsupervised Deep Hyperspectral Video Target Tracking and High Spectral-Spatial-Temporal Resolution (H\u00b3) Benchmark Dataset","volume":"60","author":"Liu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.isprsjprs.2021.10.018","article-title":"Histograms of oriented mosaic gradients for snapshot spectral image description","volume":"183","author":"Chen","year":"2022","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, N., Zhou, W., Wang, J., and Li, H. (2021, January 19\u201325). Transformer meets tracker: Exploiting temporal context for robust visual tracking. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Online.","DOI":"10.1109\/CVPR46437.2021.00162"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Yan, B., Peng, H., Fu, J., Wang, D., and Lu, H. 
(2021, January 11\u201317). Learning spatio-temporal transformer for visual tracking. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.01028"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Yu, B., Tang, M., Zheng, L., Zhu, G., Wang, J., Feng, H., Feng, X., and Lu, H. (2021, January 11\u201317). High-performance discriminative tracking with transformers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00971"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Zhao, C., Liu, H., Su, N., Xu, C., Yan, Y., and Feng, S. (2023). TMTNet: A Transformer-Based Multimodality Information Transfer Network for Hyperspectral Object Tracking. Remote Sens., 15.","DOI":"10.3390\/rs15041107"},{"key":"ref_30","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA."},{"key":"ref_31","unstructured":"Park, N., and Kim, S. (2022). How Do Vision Transformers Work?. arXiv."},{"key":"ref_32","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","article-title":"A survey on vision transformer","volume":"45","author":"Han","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Cao, Z., Huang, Z., Pan, L., Zhang, S., Liu, Z., and Fu, C. (2022, January 18\u201324). TCTrack: Temporal Contexts for Aerial Tracking. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01438"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Mayer, C., Danelljan, M., Bhat, G., Paul, M., Paudel, D.P., Yu, F., and Van Gool, L. (2022, January 18\u201324). Transforming model prediction for tracking. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00853"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"225","DOI":"10.1016\/j.aiopen.2021.08.002","article-title":"Pre-trained models: Past, present and future","volume":"2","author":"Han","year":"2021","journal-title":"AI Open"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"583","DOI":"10.1109\/TPAMI.2014.2345390","article-title":"High-speed tracking with kernelized correlation filters","volume":"37","author":"Henriques","year":"2014","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Kiani Galoogahi, H., Fagg, A., and Lucey, S. (2017, January 21\u201326). Learning background-aware correlation filters for visual tracking. Proceedings of the IEEE International Conference on Computer Vision, Honolulu, HI, USA.","DOI":"10.1109\/ICCV.2017.129"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Wang, N., Zhou, W., Tian, Q., Hong, R., Wang, M., and Li, H. (2018, January 18\u201323). Multi-cue correlation filters for robust visual tracking. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00509"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1561","DOI":"10.1109\/TPAMI.2016.2609928","article-title":"Discriminative scale space tracking","volume":"39","author":"Danelljan","year":"2016","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Hare, S., Saffari, A., and Torr, P.H.S. (2011, January 6\u201313). Struck: Structured output tracking with kernels. Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126251"},{"key":"ref_43","unstructured":"Li, Y., and Zhu, J. (2014). Computer Vision\u2014ECCV 2014 Workshops, Springer."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Li, F., Tian, C., Zuo, W., Zhang, L., and Yang, M.H. (2018, January 18\u201323). Learning spatial-temporal regularized correlation filters for visual tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00515"},{"key":"ref_45","unstructured":"Dalal, N., and Triggs, B. (2005, January 21\u201323). Histograms of oriented gradients for human detection. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201905), San Diego, CA, USA."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Lukezic, A., Vojir, T., Cehovin Zajc, L., Matas, J., and Kristan, M. (2017, January 21\u201326). Discriminative correlation filter with channel and spatial reliability. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.515"},{"key":"ref_47","unstructured":"Henriques, J.F., Caseiro, R., Martins, P., and Batista, J. (2012). 
Computer Vision\u2014ECCV 2012, Springer."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Danelljan, M., Bhat, G., Shahbaz Khan, F., and Felsberg, M. (2017, January 21\u201326). Eco: Efficient convolution operators for tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.733"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Danelljan, M., H\u00e4ger, G., Khan, F., and Felsberg, M. (2014, January 1\u20135). Accurate Scale Estimation for Robust Visual Tracking. Proceedings of the British Machine Vision Conference, Nottingham, UK.","DOI":"10.5244\/C.28.65"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Li, Y., Zhu, J., Hoi, S.C., Song, W., Wang, Z., and Liu, H. (2019, January 29\u201331). Robust estimation of similarity transformation for visual object tracking. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA.","DOI":"10.1609\/aaai.v33i01.33018666"},{"key":"ref_51","unstructured":"Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., and Torr, P.H. (2016). Computer Vision\u2014ECCV 2016 Workshops, Springer."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Li, B., Yan, J., Wu, W., Zhu, Z., and Hu, X. (2018, January 18\u201323). High performance visual tracking with siamese region proposal network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00935"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Li, B., Wu, W., Wang, Q., Zhang, F., Xing, J., and Yan, J.S. (2019, January 15\u201320). Evolution of siamese visual tracking with very deep networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00441"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Zhu, Z., Wang, Q., Li, B., Wu, W., Yan, J., and Hu, W. 
(2018, January 8\u201314). Distractor-aware siamese networks for visual object tracking. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01240-3_7"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Chen, Z., Zhong, B., Li, G., Zhang, S., and Ji, R. (2020, January 13\u201319). Siamese box adaptive network for visual tracking. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00670"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Nam, H., and Han, B. (2016, January 27\u201330). Learning multi-domain convolutional neural networks for visual tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.465"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Li, Z., Xiong, F., Zhou, J., Wang, J., Lu, J., and Qian, Y. (2020, January 25\u201328). BAE-Net: A band attention aware ensemble network for hyperspectral object tracking. Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates.","DOI":"10.1109\/ICIP40778.2020.9191105"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Song, Y., Ma, C., Wu, X., Gong, L., Bao, L., Zuo, W., Shen, C., Lau, R.W., and Yang, M.H. (2018, January 18\u201323). Vital: Visual tracking via adversarial learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00937"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Guo, D., Wang, J., Cui, Y., Wang, Z., and Chen, S. (2020, January 13\u201319). SiamCAR: Siamese fully convolutional classification and regression for visual tracking. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00630"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Zhang, L., Gonzalez-Garcia, A., van de Weijer, J., Danelljan, M., and Khan, F.S. (2019, January 21\u201326). Learning the model update for siamese trackers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Honolulu, HI, USA.","DOI":"10.1109\/ICCV.2019.00411"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/7\/1735\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:01:46Z","timestamp":1760122906000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/7\/1735"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,23]]},"references-count":60,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2023,4]]}},"alternative-id":["rs15071735"],"URL":"https:\/\/doi.org\/10.3390\/rs15071735","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,23]]}}}