{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T14:18:11Z","timestamp":1775225891589,"version":"3.50.1"},"reference-count":33,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2024,1,22]],"date-time":"2024-01-22T00:00:00Z","timestamp":1705881600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Singapore Ministry of Education","award":["AcRF Tier 1 RG91\/22","NTU startup fund"],"award-info":[{"award-number":["AcRF Tier 1 RG91\/22"]},{"award-number":["NTU startup fund"]}]},{"name":"China Scholarship Council","award":["202206450035"],"award-info":[{"award-number":["202206450035"]}]},{"name":"National Natural Science Foundation of China","award":["62072469"],"award-info":[{"award-number":["62072469"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Federated learning allows multiple parties to train models while jointly protecting user privacy. However, traditional federated learning requires each client to have the same model structure to fuse the global model. In real-world scenarios, each client may need to develop personalized models based on its environment, making it difficult to perform federated learning in a heterogeneous model environment. Some knowledge distillation methods address the problem of heterogeneous model fusion to some extent. However, these methods assume that each client is trustworthy. Some clients may produce malicious or low-quality knowledge, making it difficult to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) to achieve client identification and trustworthy knowledge fusion. Firstly, we propose a malicious client identification method based on client logit features, which can exclude malicious information in fusing the global logit. Then, we propose a selective knowledge fusion method to achieve high-quality global logit computation. Additionally, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. 
The experiments show that our method outperforms the baseline methods, showing stable performance in all attack scenarios and achieving an accuracy improvement of 2% to 3% in different data distributions.<\/jats:p>","DOI":"10.3390\/e26010096","type":"journal-article","created":{"date-parts":[[2024,1,23]],"date-time":"2024-01-23T08:28:28Z","timestamp":1705998508000},"page":"96","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["FedTKD: A Trustworthy Heterogeneous Federated Learning Based on Adaptive Knowledge Distillation"],"prefix":"10.3390","volume":"26","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7701-6313","authenticated-orcid":false,"given":"Leiming","family":"Chen","sequence":"first","affiliation":[{"name":"School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9800-1068","authenticated-orcid":false,"given":"Weishan","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-5769-6263","authenticated-orcid":false,"given":"Cihao","family":"Dong","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3637-4939","authenticated-orcid":false,"given":"Dehai","family":"Zhao","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Sydney 2015, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4371-0953","authenticated-orcid":false,"given":"Xingjie","family":"Zeng","sequence":"additional","affiliation":[{"name":"School of Computer Science, Southwest Petroleum University, Chengdu 610500, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6922-5986","authenticated-orcid":false,"given":"Sibo","family":"Qiao","sequence":"additional","affiliation":[{"name":"School of Software, Tiangong University, Tianjin 300387, China"}]},{"given":"Yichang","family":"Zhu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6624-9752","authenticated-orcid":false,"given":"Chee Wei","family":"Tan","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,22]]},"reference":[{"key":"ref_1","unstructured":"McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B.A. (2017, January 20\u201322). Communication-efficient learning of deep networks from decentralized data. Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA."},{"key":"ref_2","unstructured":"Karimireddy, S.P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A.T. (2020, January 13\u201318). Scaffold: Stochastic controlled averaging for federated learning. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_3","first-page":"429","article-title":"Federated optimization in heterogeneous networks","volume":"2","author":"Li","year":"2020","journal-title":"Proc. Mach. Learn. Syst."},{"key":"ref_4","unstructured":"Xie, C., Koyejo, S., and Gupta, I. (2019). Asynchronous federated optimization. arXiv."},{"key":"ref_5","unstructured":"Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Fukuda, T., Suzuki, M., Kurata, G., Thomas, S., Cui, J., and Ramabhadran, B. (2017, January 20\u201324). 
Efficient Knowledge Distillation from an Ensemble of Teachers. Proceedings of the Interspeech, Stockholm, Sweden.","DOI":"10.21437\/Interspeech.2017-614"},{"key":"ref_7","unstructured":"Li, D., and Wang, J. (2019). Fedmd: Heterogenous federated learning via model distillation. arXiv."},{"key":"ref_8","first-page":"2351","article-title":"Ensemble distillation for robust model fusion in federated learning","volume":"33","author":"Lin","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Jiang, D., Shan, C., and Zhang, Z. (2020, January 23\u201325). Federated learning algorithm based on knowledge distillation. Proceedings of the 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Beijing, China.","DOI":"10.1109\/ICAICE51518.2020.00038"},{"key":"ref_10","unstructured":"Zhu, Z., Hong, J., and Zhou, J. (2021, January 18\u201324). Data-free knowledge distillation for heterogeneous federated learning. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Zhang, L., Shen, L., Ding, L., Tao, D., and Duan, L.Y. (2022, January 18\u201324). Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00993"},{"key":"ref_12","unstructured":"Zhang, Z., Shen, T., Zhang, J., and Wu, C. (2022). Feddtg: Federated data-free knowledge distillation via three-player generative adversarial networks. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Lu, Q., Zhu, L., Xu, X., Whittle, J., and Xing, Z. (2022, January 16\u201317). Towards a roadmap on software engineering for responsible AI. 
Proceedings of the 1st International Conference on AI Engineering: Software Engineering for AI, Pittsburgh, PA, USA.","DOI":"10.1145\/3522664.3528607"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"63","DOI":"10.1109\/MS.2022.3233582","article-title":"Responsible-AI-by-design: A pattern collection for designing responsible AI systems","volume":"40","author":"Lu","year":"2023","journal-title":"IEEE Softw."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Chen, L., Zhang, W., Xu, L., Zeng, X., Lu, Q., Zhao, H., Chen, B., and Wang, X. (August, January 15). A Federated Parallel Data Platform for Trustworthy AI. Proceedings of the 2021 IEEE 1st International Conference on Digital Twins and Parallel Intelligence (DTPI), Beijing, China.","DOI":"10.1109\/DTPI52967.2021.9540175"},{"key":"ref_16","first-page":"7611","article-title":"Tackling the objective inconsistency problem in heterogeneous federated optimization","volume":"33","author":"Wang","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Li, Q., He, B., and Song, D. (2021, January 19\u201325). Model-contrastive federated learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01057"},{"key":"ref_18","unstructured":"Seo, H., Park, J., Oh, S., Bennis, M., and Kim, S.L. (2022). Machine Learning and Wireless Communications, Cambridge University Press."},{"key":"ref_19","unstructured":"Chen, H., and Vikalo, H. (2023). The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation. arXiv."},{"key":"ref_20","unstructured":"Li, S., Cheng, Y., Wang, W., Liu, Y., and Chen, T. (2020). Learning to detect malicious clients for robust federated learning. arXiv."},{"key":"ref_21","unstructured":"Chen, L., Dong, C., Qiao, S., Huang, Z., Nie, Y., Hou, Z., and Tan, C. 
(2023). FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning. arXiv."},{"key":"ref_22","unstructured":"Blanchard, P., El Mhamdi, E.M., Guerraoui, R., and Stainer, J. (2017, January 4\u20139). Machine learning with adversaries: Byzantine tolerant gradient descent. Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_23","first-page":"1","article-title":"A Credible and Fair Federated Learning Framework Based on Blockchain","volume":"1","author":"Chen","year":"2024","journal-title":"IEEE Trans. Artif. Intell."},{"key":"ref_24","unstructured":"Yin, D., Chen, Y., Kannan, R., and Bartlett, P. (2018, January 10\u201315). Byzantine-robust distributed learning: Towards optimal statistical rates. Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Li, S., Ngai, E.C.H., and Voigt, T. (2023). An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning. arXiv.","DOI":"10.36227\/techrxiv.19560325.v1"},{"key":"ref_26","unstructured":"Karimireddy, S.P., He, L., and Jaggi, M. (2021, January 18\u201324). Learning from history for byzantine robust optimization. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"6388","DOI":"10.1109\/TII.2021.3132954","article-title":"RobustFL: Robust federated learning against poisoning attacks in industrial IoT systems","volume":"18","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Ind. Inform."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Wang, Y., Xie, L., Liu, X., Yin, J.L., and Zheng, T. (2021, January 19\u201322). Model-agnostic adversarial example detection through logit distribution learning. 
Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA.","DOI":"10.1109\/ICIP42928.2021.9506292"},{"key":"ref_29","unstructured":"Cheng, S., Wu, J., Xiao, Y., and Liu, Y. (2021). Fedgems: Federated learning of larger server models via selective knowledge fusion. arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zhang, H., Chen, D., and Wang, C. (2022, January 23\u201327). Confidence-aware multi-teacher knowledge distillation. Proceedings of the ICASSP 2022\u20142022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore.","DOI":"10.1109\/ICASSP43922.2022.9747534"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"He, Y., Chen, Y., Yang, X., Zhang, Y., and Zeng, B. (March, January 22). Class-wise adaptive self distillation for heterogeneous federated learning. Proceedings of the 36th AAAI Conference on Artificial Intelligence, Virtual.","DOI":"10.1609\/aaai.v36i11.21620"},{"key":"ref_32","unstructured":"Lukasik, M., Bhojanapalli, S., Menon, A.K., and Kumar, S. (2021). Teacher\u2019s pet: Understanding and mitigating biases in distillation. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Chan, Y.H., and Ngai, E.C. (2021, January 13\u201315). Fedhe: Heterogeneous models and communication-efficient federated learning. 
Proceedings of the 2021 17th International Conference on Mobility, Sensing and Networking (MSN), Exeter, UK.","DOI":"10.1109\/MSN53354.2021.00043"}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/26\/1\/96\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:47:29Z","timestamp":1760104049000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/26\/1\/96"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,22]]},"references-count":33,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["e26010096"],"URL":"https:\/\/doi.org\/10.3390\/e26010096","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,22]]}}}