{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T14:24:46Z","timestamp":1774535086090,"version":"3.50.1"},"reference-count":35,"publisher":"MDPI AG","issue":"22","license":[{"start":{"date-parts":[[2023,11,20]],"date-time":"2023-11-20T00:00:00Z","timestamp":1700438400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62265017"],"award-info":[{"award-number":["62265017"]}]},{"name":"Yunnan Normal University graduate research innovation fund project","award":["YJSJJ23-B181"],"award-info":[{"award-number":["YJSJJ23-B181"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>To address challenges in cross-view gait recognition, such as high network-model complexity, a large number of parameters, and slow training and testing, this paper proposes Multi-Teacher Joint Knowledge Distillation (MJKD). The algorithm employs multiple complex teacher models to train on gait images from a single view, extracting inter-class relationships that are then weighted and fused into a combined set of inter-class relationships. These relationships guide the training of a lightweight student model, improving its gait feature extraction capability and recognition accuracy. To validate the effectiveness of MJKD, experiments are performed on the CASIA_B dataset using the ResNet network as the baseline. 
The experimental results show that the student model trained with MJKD achieves 98.24% recognition accuracy while significantly reducing the number of parameters and computational cost.<\/jats:p>","DOI":"10.3390\/s23229289","type":"journal-article","created":{"date-parts":[[2023,11,20]],"date-time":"2023-11-20T11:31:36Z","timestamp":1700479896000},"page":"9289","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Cross-View Gait Recognition Method Based on Multi-Teacher Joint Knowledge Distillation"],"prefix":"10.3390","volume":"23","author":[{"given":"Ruoyu","family":"Li","sequence":"first","affiliation":[{"name":"College of Information, Yunnan Normal University, Kunming 650500, China"},{"name":"Engineering Research Center of Computer Vision and Intelligent Control Technology, Department of Education, Kunming 650500, China"}]},{"given":"Lijun","family":"Yun","sequence":"additional","affiliation":[{"name":"College of Information, Yunnan Normal University, Kunming 650500, China"},{"name":"Engineering Research Center of Computer Vision and Intelligent Control Technology, Department of Education, Kunming 650500, China"}]},{"given":"Mingxuan","family":"Zhang","sequence":"additional","affiliation":[{"name":"Xi\u2019an Institute of Applied Optics, Xi\u2019an 710000, China"}]},{"given":"Yanchen","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Information, Yunnan Normal University, Kunming 650500, China"},{"name":"Engineering Research Center of Computer Vision and Intelligent Control Technology, Department of Education, Kunming 650500, China"}]},{"given":"Feiyan","family":"Cheng","sequence":"additional","affiliation":[{"name":"College of Information, Yunnan Normal University, Kunming 650500, China"},{"name":"Engineering Research Center of Computer Vision and Intelligent Control Technology, Department of Education, Kunming 650500, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2023,11,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"70497","DOI":"10.1109\/ACCESS.2018.2879896","article-title":"Vision-Based Gait Recognition: A Survey","volume":"6","author":"Singh","year":"2018","journal-title":"IEEE Access"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3230633","article-title":"A survey on gait recognition","volume":"51","author":"Wan","year":"2018","journal-title":"ACM Comput. Surv. (CSUR)"},{"key":"ref_3","first-page":"264","article-title":"Deep gait recognition: A survey","volume":"45","author":"Etemad","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_4","first-page":"261","article-title":"Research on infrared human gait recognition based on long short term memory network","volume":"59","author":"Mei","year":"2022","journal-title":"Laser Optoelectron. Prog."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"71","DOI":"10.5768\/JAO202344.0102002","article-title":"Research on gait recognition in infrared human body image based on improved ViT","volume":"44","author":"Yang","year":"2023","journal-title":"J. Appl. Opt."},{"key":"ref_6","unstructured":"Zhu, Z., Guo, X., Yang, T., Huang, J., Deng, J., Huang, G., Du, D., Lu, J., and Zhou, J. (2021, January 11\u201317). Gait recognition in the wild: A benchmark. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1265","DOI":"10.11834\/jig.220458","article-title":"A review of cross-view gait recognition","volume":"28","author":"Xu","year":"2023","journal-title":"J. Image Graph."},{"key":"ref_8","unstructured":"Koch, G., Zemel, R., and Salakhutdinov, R. (2015, January 6\u201311). Siamese neural networks for one-shot image recognition. 
Proceedings of the ICML Deep Learning Workshop, Lille, France."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Tome, D., Russell, C., and Agapito, L. (2017, January 21\u201326). Lifting from the deep: Convolutional 3D pose estimation from a single image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.603"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Weng, J., Liu, M., Jiang, X., and Yuan, J. (2018, January 8\u201314). Deformable pose traversal convolution for 3d action and gesture recognition. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_9"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/TPAMI.2016.2545669","article-title":"A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs","volume":"39","author":"Wu","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"228","DOI":"10.1016\/j.patcog.2019.04.023","article-title":"A comprehensive study on gait biometrics using a joint CNN-based method","volume":"93","author":"Zhang","year":"2019","journal-title":"Pattern Recognit."},{"key":"ref_13","first-page":"8126","article-title":"Gaitset: Regarding gait as a set for cross-view gait recognition","volume":"33","author":"Chao","year":"2019","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"106988","DOI":"10.1016\/j.patcog.2019.106988","article-title":"Gaitnet: An end-to-end network for gait based human identification","volume":"96","author":"Song","year":"2019","journal-title":"Pattern Recognit."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Fan, C., Peng, Y., Cao, C., Liu, X., Hou, S., Chi, J., Huang, Y., Li, Q., and He, Z. (2020, January 13\u201319). 
Gaitpart: Temporal part-based model for gait recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01423"},{"key":"ref_16","unstructured":"Yu, S., Tan, D., and Tan, T. (2006, January 20\u201324). A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. Proceedings of the IEEE 18th International Conference on Pattern Recognition (ICPR\u201906), Hong Kong, China."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Fan, C., Liang, J., Shen, C., Hou, S., Huang, Y., and Yu, S. (2023, January 18\u201322). OpenGait: Revisiting Gait Recognition Towards Better Practicality. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00936"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zheng, J., Liu, X., Liu, W., He, L., Yan, C., and Mei, T. (2022, January 18\u201324). Gait recognition in the wild with dense 3d representations and a benchmark. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01959"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Teepe, T., Khan, A., Gilg, J., Herzog, F., Hormann, S., and Rigoll, G. (2021, January 19\u201322). Gaitgraph: Graph convolutional network for skeleton-based gait recognition. Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA.","DOI":"10.1109\/ICIP42928.2021.9506717"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Zhang, S., Wang, Y., and Li, A. (2021, January 19\u201325). Cross-view gait recognition with deep universal linear embeddings. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00898"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"316","DOI":"10.1109\/TPAMI.2006.38","article-title":"Individual recognition using gait energy image","volume":"28","author":"Han","year":"2005","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2295","DOI":"10.1016\/j.sigpro.2010.01.024","article-title":"Active energy image plus 2DLPP for gait recognition","volume":"90","author":"Zhang","year":"2010","journal-title":"Signal Process."},{"key":"ref_23","unstructured":"Zhang, E.H., Ma, H.B., Lu, J.W., and Chen, Y.J. (2009, January 12\u201315). Gait recognition using dynamic gait energy and PCA+ LPP method. Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, Baoding, China."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1016\/j.neucom.2017.02.006","article-title":"Invariant feature extraction for gait recognition using only one uniform model","volume":"239","author":"Yu","year":"2017","journal-title":"Neurocomputing"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Yu, S., Chen, H., Garcia Reyes, E.B., and Poh, N. (2017, January 21\u201326). Gaitgan: Invariant gait feature extraction using generative adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.80"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"102","DOI":"10.1109\/TIFS.2018.2844819","article-title":"Multi-task GANs for view-specific feature learning in gait recognition","volume":"14","author":"He","year":"2018","journal-title":"IEEE Trans. Inf. 
Forensics Secur."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"245","DOI":"10.1016\/j.neucom.2019.02.025","article-title":"Learning view invariant gait features with two-stream GAN","volume":"339","author":"Wang","year":"2019","journal-title":"Neurocomputing"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"e474","DOI":"10.7717\/peerj-cs.474","article-title":"Knowledge distillation in deep learning and its applications","volume":"7","author":"Alkhulaifi","year":"2021","journal-title":"PeerJ Comput. Sci."},{"key":"ref_29","unstructured":"Ba, J., and Caruana, R. (2014). Do deep nets really need to be deep?. Adv. Neural Inf. Process. Syst., 27."},{"key":"ref_30","unstructured":"Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv."},{"key":"ref_31","unstructured":"Sau, B.B., and Balasubramanian, V.N. (2016). Deep model compression: Distilling knowledge from noisy teachers. arXiv."},{"key":"ref_32","unstructured":"Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C., and Bengio, Y. (2014). Fitnets: Hints for thin deep nets. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"You, S., Xu, C., Xu, C., and Tao, D. (2017, January 13\u201317). Learning from multiple teacher networks. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada.","DOI":"10.1145\/3097983.3098135"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Yim, J., Joo, D., Bae, J.-H., and Kim, J. (2017, January 21\u201326). A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.754"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/22\/9289\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:26:31Z","timestamp":1760131591000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/22\/9289"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,20]]},"references-count":35,"journal-issue":{"issue":"22","published-online":{"date-parts":[[2023,11]]}},"alternative-id":["s23229289"],"URL":"https:\/\/doi.org\/10.3390\/s23229289","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,11,20]]}}}