{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,22]],"date-time":"2026-04-22T18:59:42Z","timestamp":1776884382692,"version":"3.51.2"},"reference-count":66,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2023,4,23]],"date-time":"2023-04-23T00:00:00Z","timestamp":1682208000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61976098"],"award-info":[{"award-number":["61976098"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2022J06023"],"award-info":[{"award-number":["2022J06023"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2021FX03"],"award-info":[{"award-number":["2021FX03"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Natural Science Foundation for Outstanding Young Scholars of Fujian Province","award":["61976098"],"award-info":[{"award-number":["61976098"]}]},{"name":"Natural Science Foundation for Outstanding Young Scholars of Fujian Province","award":["2022J06023"],"award-info":[{"award-number":["2022J06023"]}]},{"name":"Natural Science Foundation for Outstanding Young Scholars of Fujian Province","award":["2021FX03"],"award-info":[{"award-number":["2021FX03"]}]},{"name":"Collaborative Innovation Platform Project of Fuzhou-Xiamen-Quanzhou National Independent Innovation Demonstration Zone","award":["61976098"],"award-info":[{"award-number":["61976098"]}]},{"name":"Collaborative Innovation Platform Project of 
Fuzhou-Xiamen-Quanzhou National Independent Innovation Demonstration Zone","award":["2022J06023"],"award-info":[{"award-number":["2022J06023"]}]},{"name":"Collaborative Innovation Platform Project of Fuzhou-Xiamen-Quanzhou National Independent Innovation Demonstration Zone","award":["2021FX03"],"award-info":[{"award-number":["2021FX03"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Multi-modal (i.e., visible, near-infrared, and thermal-infrared) vehicle re-identification has good potential to search vehicles of interest in low illumination. However, because different modalities have different imaging characteristics, properly fusing multi-modal complementary information is crucial to multi-modal vehicle re-identification. To that end, this paper proposes a progressively hybrid transformer (PHT). The PHT method consists of two aspects: random hybrid augmentation (RHA) and a feature hybrid mechanism (FHM). Regarding RHA, an image random cropper and a local region hybrider are designed. The image random cropper simultaneously crops multi-modal images at random positions, with random numbers, random sizes, and random aspect ratios to generate local regions. The local region hybrider fuses the cropped regions so that regions of each modality carry local structural characteristics of all modalities, mitigating modal differences at the beginning of feature learning. Regarding the FHM, a modal-specific controller and a modal information embedding are designed to effectively fuse multi-modal information at the feature level. 
Experimental results show that the proposed method outperforms the state-of-the-art method by 2.7% mAP on RGBNT100 and by 6.6% mAP on RGBN300, demonstrating that the proposed method can learn multi-modal complementary information effectively.<\/jats:p>","DOI":"10.3390\/s23094206","type":"journal-article","created":{"date-parts":[[2023,4,24]],"date-time":"2023-04-24T03:04:08Z","timestamp":1682305448000},"page":"4206","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":24,"title":["Progressively Hybrid Transformer for Multi-Modal Vehicle Re-Identification"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4648-6491","authenticated-orcid":false,"given":"Wenjie","family":"Pan","sequence":"first","affiliation":[{"name":"College of Engineering, Huaqiao University, Quanzhou 362021, China"}]},{"given":"Linhan","family":"Huang","sequence":"additional","affiliation":[{"name":"College of Engineering, Huaqiao University, Quanzhou 362021, China"}]},{"given":"Jianbao","family":"Liang","sequence":"additional","affiliation":[{"name":"College of Engineering, Huaqiao University, Quanzhou 362021, China"}]},{"given":"Lan","family":"Hong","sequence":"additional","affiliation":[{"name":"College of Engineering, Huaqiao University, Quanzhou 362021, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8840-3629","authenticated-orcid":false,"given":"Jianqing","family":"Zhu","sequence":"additional","affiliation":[{"name":"College of Engineering, Huaqiao University, Quanzhou 362021, China"},{"name":"Xiamen Yealink Network Technology Company Limited, No. 666, Hu\u2019an Road, High-Tech Park, Huli District, Xiamen 361015, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,4,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Avola, D., Cinque, L., Fagioli, A., Foresti, G.L., Pannone, D., and Piciarelli, C. (2020). 
Bodyprint\u2014A meta-feature based LSTM hashing model for person re-identification. Sensors, 20.","DOI":"10.3390\/s20185365"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Paolanti, M., Romeo, L., Liciotti, D., Pietrini, R., Cenci, A., Frontoni, E., and Zingaretti, P. (2018). Person re-identification with RGB-D camera in top-view configuration through multiple nearest neighbor classifiers and neighborhood component features selection. Sensors, 18.","DOI":"10.3390\/s18103471"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Uddin, M.K., Bhuiyan, A., Bappee, F.K., Islam, M.M., and Hasan, M. (2023). Person Re-Identification with RGB\u2013D and RGB\u2013IR Sensors: A Comprehensive Survey. Sensors, 23.","DOI":"10.3390\/s23031504"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"3162","DOI":"10.3390\/math9243162","article-title":"Trends in vehicle re-identification past, present, and future: A comprehensive review","volume":"9","author":"Deng","year":"2021","journal-title":"Mathematics"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Zhu, X., Luo, Z., Fu, P., and Ji, X. (2020, January 14\u201319). Voc-reid: Vehicle re-identification based on vehicle-orientation-camera. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00309"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, Z., Tang, L., Liu, X., Yao, Z., Yi, S., Shao, J., Yan, J., Wang, S., Li, H., and Wang, X. (2017, January 22\u201329). Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.49"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Meng, D., Li, L., Wang, S., Gao, X., Zha, Z.J., and Huang, Q. (2020, January 12\u201316). 
Fine-grained feature alignment with part perspective transformation for vehicle reid. Proceedings of the ACM International Conference on Multimedia, Seattle, WA, USA.","DOI":"10.1145\/3394171.3413573"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Zhou, Y., and Shao, L. (2018, January 18\u201322). Aware attentive multi-view inference for vehicle re-identification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00679"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"410","DOI":"10.1109\/TITS.2019.2901312","article-title":"Vehicle re-identification using quadruple directional deep learning features","volume":"21","author":"Zhu","year":"2019","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"50","DOI":"10.1016\/j.cviu.2019.03.001","article-title":"A survey of advances in vision-based vehicle re-identification","volume":"182","author":"Khan","year":"2019","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2872","DOI":"10.1109\/TPAMI.2021.3054775","article-title":"Deep learning for person re-identification: A survey and outlook","volume":"44","author":"Ye","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Yang, Q., Wang, P., Fang, Z., and Lu, Q. (2020). Focus on the visible regions: Semantic-guided alignment model for occluded person re-identification. Sensors, 20.","DOI":"10.3390\/s20164431"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Chen, Y., Yang, T., Li, C., and Zhang, Y. (2020). A Binarized segmented ResNet based on edge computing for re-identification. Sensors, 20.","DOI":"10.3390\/s20236902"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Si, R., Zhao, J., Tang, Y., and Yang, S. (2021). 
Relation-based deep attention network with hybrid memory for one-shot person re-identification. Sensors, 21.","DOI":"10.3390\/s21155113"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"8222","DOI":"10.3390\/s130708222","article-title":"On the use of simple geometric descriptors provided by RGB-D sensors for re-identification","volume":"13","year":"2013","journal-title":"Sensors"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"He, S., Luo, H., Wang, P., Wang, F., Li, H., and Jiang, W. (2021, January 11\u201317). Transreid: Transformer-based object re-identification. Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01474"},{"key":"ref_17","unstructured":"Li, H., Li, C., Zhu, X., Zheng, A., and Luo, B. (2020, January 7\u201312). Multi-spectral vehicle re-identification: A challenge. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA."},{"key":"ref_18","unstructured":"Zheng, A., Wang, Z., Chen, Z., Li, C., and Tang, J. (2021, January 2\u20139). Robust multi-modality person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada."},{"key":"ref_19","unstructured":"Zheng, A., Zhu, X., Li, C., Tang, J., and Ma, J. (2022). Multi-spectral Vehicle Re-identification with Cross-directional Consistency Network and a High-quality Benchmark. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wang, Z., Li, C., Zheng, A., He, R., and Tang, J. (2022, January 17\u201319). Interact, embed, and enlarge: Boosting modality-specific representations for multi-modal person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, Virginia, VA, USA.","DOI":"10.1609\/aaai.v36i3.20165"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Guo, J., Zhang, X., Liu, Z., and Wang, Y. (2022, January 21\u201324). 
Generative and Attentive Fusion for Multi-spectral Vehicle Re-Identification. Proceedings of the International Conference on Intelligent Computing and Signal Processing, Beijing, China.","DOI":"10.1109\/ICSP54964.2022.9778769"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Kamenou, E., Rincon, J., Miller, P., and Devlin-Hill, P. (2022, January 21\u201325). Closing the Domain Gap for Cross-modal Visible-Infrared Vehicle Re-identification. Proceedings of the International Conference on Pattern Recognition, Montr\u00e9al, QC, Canada.","DOI":"10.1109\/ICPR56361.2022.9956381"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Pan, W., Wu, H., Zhu, J., Zeng, H., and Zhu, X. (2022, January 27\u201328). H-ViT: Hybrid Vision Transformer for Multi-modal Vehicle Re-identification. Proceedings of the CAAI International Conference on Artificial Intelligence, Beijing, China.","DOI":"10.1007\/978-3-031-20497-5_21"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhang, G., Zhang, P., Qi, J., and Lu, H. (2021, January 20\u201324). Hat: Hierarchical aggregation transformers for person re-identification. Proceedings of the ACM International Conference on Multimedia, Chengdu, China.","DOI":"10.1145\/3474085.3475202"},{"key":"ref_25","unstructured":"Khorramshahi, P., Kumar, A., Peri, N., Rambhatla, S.S., Chen, J.C., and Chellappa, R. (November, January 27). A dual-path model with adaptive attention for vehicle re-identification. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"4328","DOI":"10.1109\/TIP.2019.2910408","article-title":"Two-level attention network with multi-grain ranking loss for vehicle re-identification","volume":"28","author":"Guo","year":"2019","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"919","DOI":"10.1109\/TMM.2021.3134839","article-title":"Exploiting Multi-view Part-wise Correlation via an Efficient Transformer for Vehicle Re-Identification","volume":"25","author":"Li","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gu, X., Chang, H., Ma, B., Bai, S., Shan, S., and Chen, X. (2022, January 19\u201324). Clothes-changing person re-identification with rgb modality only. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00113"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1291","DOI":"10.3390\/app9071291","article-title":"Efficient and deep vehicle re-identification using multi-level feature extraction","volume":"9","author":"Cai","year":"2019","journal-title":"Appl. Sci."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"3064","DOI":"10.1109\/TMM.2020.2969782","article-title":"Illumination-adaptive person re-identification","volume":"22","author":"Zeng","year":"2020","journal-title":"IEEE Trans. Multimed."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Da Xu, R.Y., Jiang, S., Li, Y., Huang, C., and Deng, C. (2020, January 25\u201328). Illumination adaptive person reid based on teacher-student model and adversarial training. 
Proceedings of the 2020 IEEE International Conference on Image Processing, Abu Dhabi, United Arab Emirates.","DOI":"10.1109\/ICIP40778.2020.9190796"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"Imagenet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_34","unstructured":"Simonyan, K., and Zisserman, A. (2015, January 7\u20139). Very deep convolutional networks for large-scale image recognition. Proceedings of the International Conference on Learning Representations, San Diego, CA, USA."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_36","unstructured":"Ioffe, S., and Szegedy, C. (2015, January 6\u201311). Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the International Conference on Machine Learning, Lille, France."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A.A. 
(2017, January 4\u20139). Inception-v4, inception-resnet and the impact of residual connections on learning. Proceedings of the Thirty-first AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"333","DOI":"10.1016\/j.neucom.2022.01.008","article-title":"Online multi-object tracking with unsupervised re-identification learning and occlusion estimation","volume":"483","author":"Liu","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_41","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_42","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021, January 3\u20137). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. Proceedings of the International Conference on Learning Representations, Vienna, Austria."},{"key":"ref_43","unstructured":"Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and J\u00e9gou, H. (2021, January 18\u201324). Training data-efficient image transformers & distillation through attention. Proceedings of the International Conference on Machine Learning, Virtual Only."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). 
Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., and Torr, P.H. (2021, January 19\u201325). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00681"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Wang, W., Xie, E., Li, X., Fan, D.P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. (2021, January 11\u201317). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"415","DOI":"10.1007\/s41095-022-0274-8","article-title":"Pvt v2: Improved baselines with pyramid vision transformer","volume":"8","author":"Wang","year":"2022","journal-title":"Comput. Vis. Media"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Wang, H., Shen, J., Liu, Y., Gao, Y., and Gavves, E. (2022, January 19\u201325). Nformer: Robust person re-identification with neighbor transformer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00715"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","article-title":"A survey on vision transformer","volume":"45","author":"Han","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","article-title":"Imagenet large scale visual recognition challenge","volume":"115","author":"Russakovsky","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Wu, Y.H., Liu, Y., Zhan, X., and Cheng, M.M. (2022). P2T: Pyramid pooling transformer for scene understanding. IEEE Trans. Pattern Anal. Mach. Intell., 1\u201312.","DOI":"10.1109\/TPAMI.2022.3202765"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"2352","DOI":"10.1109\/TIP.2022.3141868","article-title":"Structure-aware positional transformer for visible-infrared person re-identification","volume":"31","author":"Chen","year":"2022","journal-title":"IEEE Trans. Image Process."},{"key":"ref_55","unstructured":"Zhong, Z., Zheng, L., Kang, G., Li, S., and Yang, Y. (2020, January 7\u201312). Random erasing data augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA."},{"key":"ref_56","first-page":"5056","article-title":"Learning generalisable omni-scale representations for person re-identification","volume":"44","author":"Zhou","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_57","unstructured":"Chen, M., Wang, Z., and Zheng, F. (2021). 
Benchmarks for corruption invariant person re-identification. arXiv."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Li, Q., Yu, Z., Wang, Y., and Zheng, H. (2020). TumorGAN: A multi-modal data augmentation framework for brain tumor segmentation. Sensors, 20.","DOI":"10.3390\/s20154203"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"107572","DOI":"10.1016\/j.compeleceng.2021.107572","article-title":"Enhanced air quality prediction by edge-based spatiotemporal data preprocessing","volume":"96","author":"Ojagh","year":"2021","journal-title":"Comput. Electr. Eng."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"115826","DOI":"10.1109\/ACCESS.2021.3100571","article-title":"A Color\/Illuminance Aware Data Augmentation and Style Adaptation Approach to Person Re-Identification","volume":"9","author":"Lin","year":"2021","journal-title":"IEEE Access"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Huang, H., Li, D., Zhang, Z., Chen, X., and Huang, K. (2018, January 18\u201322). Adversarially occluded samples for person re-identification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00535"},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Schroff, F., Kalenichenko, D., and Philbin, J. (2015, January 7\u201312). Facenet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"ref_63","unstructured":"Gray, D., Brennan, S., and Tao, H. (2007, January 14). Evaluating appearance models for recognition, reacquisition, and tracking. Proceedings of the IEEE International Workshop on Performance Evaluation for Tracking and Surveillance, Arusha, Tanzania."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., and Tian, Q. 
(2015, January 13\u201316). Scalable person re-identification: A benchmark. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.133"},{"key":"ref_65","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019, January 8\u201314). Pytorch: An imperative style, high-performance deep learning library. Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Zhao, H., Jia, J., and Koltun, V. (2020, January 14\u201318). Exploring self-attention for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01009"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4206\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:21:32Z","timestamp":1760124092000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4206"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,23]]},"references-count":66,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2023,5]]}},"alternative-id":["s23094206"],"URL":"https:\/\/doi.org\/10.3390\/s23094206","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,23]]}}}