{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,21]],"date-time":"2025-11-21T00:57:25Z","timestamp":1763686645856,"version":"3.45.0"},"reference-count":26,"publisher":"Academy of Cognitive and Natural Sciences","issue":"2","license":[{"start":{"date-parts":[[2025,11,21]],"date-time":"2025-11-21T00:00:00Z","timestamp":1763683200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J. Edge Comp."],"abstract":"<jats:p>Transferring knowledge from multiple teacher models to a compact student model is often hindered by domain shifts between datasets and a scarcity of labeled target data, degrading performance. While existing methods address parts of this problem, a unified framework is lacking. In this work, we improve multi-teacher knowledge distillation by developing a holistic framework, enhanced multi-teacher knowledge distillation (EMTKD), that synergistically integrates three components: domain adaptation within teacher training, an instance-specific adaptive weighting mechanism for knowledge fusion, and semi-supervised learning to leverage unlabeled data. On a challenging cross-domain cardiac MRI benchmark, EMTKD achieves a target domain accuracy of 88.5% and an area under the curve of 92.5%, outperforming state-of-the-art techniques by up to 5.0%. Our results demonstrate that this integrated, adaptive approach yields significantly more robust and accurate student models, enabling effective deep learning deployment in data-scarce environments.<\/jats:p>","DOI":"10.55056\/jec.978","type":"journal-article","created":{"date-parts":[[2025,8,17]],"date-time":"2025-08-17T16:25:18Z","timestamp":1755447918000},"page":"159-178","source":"Crossref","is-referenced-by-count":1,"title":["Method of adaptive knowledge distillation from multi-teacher to student deep learning models"],"prefix":"10.55056","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-4710-3336","authenticated-orcid":false,"given":"Oleksandr","family":"Chaban","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7310-2126","authenticated-orcid":false,"given":"Eduard","family":"Manziuk","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3609-112X","authenticated-orcid":false,"given":"Pavlo","family":"Radiuk","sequence":"additional","affiliation":[]}],"member":"33647","published-online":{"date-parts":[[2025,11,21]]},"reference":[{"doi-asserted-by":"publisher","key":"86310","DOI":"10.1109\/TMI.2018.2837502"},{"doi-asserted-by":"publisher","key":"86311","DOI":"10.1109\/TMI.2021.3090082"},{"unstructured":"Chaban, O., Manziuk, E., Markevych, O., Petrovskyi, S. and Radiuk, P., 2025. EMTKD at the edge: An adaptive multi-teacher knowledge distillation for robust cardiac MRI classification. In: T.A. Vakaliuk and S.O. Semerikov, eds. Proceedings of the 5th Edge Computing Workshop (doors 2025), Zhytomyr, Ukraine, April 4, 2025, CEUR Workshop Proceedings, vol. 3943. CEUR-WS.org, pp.42\u201357. Available from: https:\/\/ceur-ws.org\/Vol-3943\/paper09.pdf.","key":"86312"},{"doi-asserted-by":"publisher","key":"86313","DOI":"10.1007\/978-3-030-32248-9_51"},{"unstructured":"Du, S., You, S., Li, X., Wu, J., Wang, F., Qian, C. and Zhang, C., 2020. Agree to disagree: Adaptive ensemble knowledge distillation in gradient space. 
Proceedings of the 34th International Conference on Neural Information Processing Systems. pp.12345\u201312355. Available from: https:\/\/dl.acm.org\/doi\/10.5555\/3495724.3496759.","key":"86314"},{"unstructured":"Ganin, Y. and Lempitsky, V.S., 2015. Unsupervised domain adaptation by backpropagation. In: F.R. Bach and D.M. Blei, eds. Proceedings of the 32nd International Conference on Machine Learning, JMLR Workshop and Conference Proceedings, vol. 37. JMLR.org, pp.1180\u20131189. Available from: https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3045118.3045244.","key":"86315"},{"doi-asserted-by":"publisher","key":"86316","DOI":"10.1109\/CVPR52688.2022.00968"},{"doi-asserted-by":"publisher","key":"86317","DOI":"10.1109\/CVPR.2016.90"},{"doi-asserted-by":"publisher","key":"86318","DOI":"10.1093\/ehjci\/jead285"},{"unstructured":"Hinton, G., Vinyals, O. and Dean, J., 2015. Distilling the knowledge in a neural network. 1503.02531, Available from: https:\/\/doi.org\/10.48550\/arXiv.1503.02531.","key":"86319"},{"doi-asserted-by":"publisher","key":"86320","DOI":"10.3390\/diagnostics13182947"},{"doi-asserted-by":"publisher","key":"86321","DOI":"10.1148\/radiol.231269"},{"unstructured":"Nabavi, S., Hamedani, K.A., Moghaddam, M.E., Abin, A.A. and Frangi, A.F., 2024. Multiple teachers-meticulous student: A domain adaptive meta-knowledge distillation model for medical image classification. 2403.11226, Available from: https:\/\/doi.org\/10.48550\/arXiv.2403.11226.","key":"86322"},{"doi-asserted-by":"publisher","key":"86323","DOI":"10.1002\/mp.17411"},{"doi-asserted-by":"publisher","key":"86324","DOI":"10.3390\/math12071024"},{"doi-asserted-by":"publisher","key":"86325","DOI":"10.1038\/s41598-024-56706-x"},{"doi-asserted-by":"publisher","key":"86326","DOI":"10.1162\/neco.1992.4.2.234"},{"doi-asserted-by":"publisher","key":"86327","DOI":"10.1109\/ACCESS.2025.3556327"},{"doi-asserted-by":"publisher","key":"86328","DOI":"10.2196\/38178"},{"unstructured":"Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C.A., Cubuk, E.D., Kurakin, A. and Li, C.L., 2020. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. Proceedings of the 34th International Conference on Neural Information Processing Systems. pp.596\u2013608. Available from: https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3495724.3495775.","key":"86329"},{"doi-asserted-by":"publisher","key":"86330","DOI":"10.1109\/ICCV51070.2023.01602"},{"doi-asserted-by":"publisher","key":"86331","DOI":"10.1007\/978-3-031-72989-8_9"},{"unstructured":"Yang, C., Yu, X., Yang, H., An, Z., Yu, C., Huang, L. and Xu, Y., 2025. Multiteacher knowledge distillation with reinforcement learning for visual recognition. 2502.18510, Available from: https:\/\/doi.org\/10.48550\/arXiv.2502.18510.","key":"86332"},{"doi-asserted-by":"publisher","key":"86333","DOI":"10.1109\/ICASSP43922.2022.9747534"},{"doi-asserted-by":"publisher","key":"86334","DOI":"10.1109\/CVPR52688.2022.02001"},{"unstructured":"Zhong, T., Chi, Z., Gu, L., Wang, Y., Yu, Y. and Tang, J., 2022. Meta-DMoE: Adapting to domain shift by meta-distillation from mixture-of-experts. Proceedings of the 36th International Conference on Neural Information Processing Systems. pp.22243\u201322257. 
Available from: https:\/\/dl.acm.org\/doi\/10.5555\/3600270.3601886.","key":"86335"}],"container-title":["Journal of Edge Computing"],"original-title":[],"link":[{"URL":"https:\/\/acnsci.org\/journal\/index.php\/jec\/article\/download\/978\/922","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/acnsci.org\/journal\/index.php\/jec\/article\/download\/978\/922","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T23:51:53Z","timestamp":1763682713000},"score":1,"resource":{"primary":{"URL":"https:\/\/acnsci.org\/journal\/index.php\/jec\/article\/view\/978"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,21]]},"references-count":26,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,11,21]]}},"URL":"https:\/\/doi.org\/10.55056\/jec.978","relation":{},"ISSN":["2837-181X"],"issn-type":[{"type":"electronic","value":"2837-181X"}],"subject":[],"published":{"date-parts":[[2025,11,21]]}}}
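The abstract in this record describes an instance-specific adaptive weighting mechanism for fusing knowledge from several teachers into one student. As a rough illustration of that general idea only (not the authors' actual EMTKD formulation, which the paper at https://doi.org/10.55056/jec.978 specifies), here is a minimal NumPy sketch: the per-instance teacher weights are assumed to come from a confidence (negative-entropy) heuristic, and the distillation loss is the standard temperature-scaled KL divergence of Hinton et al. (2015). All function names are hypothetical.

import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last axis, shifted for stability.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_teachers(teacher_logits, T=4.0):
    # teacher_logits: (K teachers, B instances, C classes).
    # Per-instance weights from prediction confidence (low entropy =>
    # higher weight); a common heuristic, not necessarily EMTKD's rule.
    probs = softmax(teacher_logits, T)                   # (K, B, C)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)  # (K, B)
    w = softmax(-ent.T)                                  # (B, K), sums to 1 over teachers
    fused = np.einsum("bk,kbc->bc", w, probs)            # (B, C) soft targets
    return fused, w

def distill_loss(student_logits, fused, T=4.0):
    # KL(fused || student), scaled by T^2 as in Hinton et al. (2015).
    s = softmax(student_logits, T)
    kl = (fused * (np.log(fused + 1e-12) - np.log(s + 1e-12))).sum(axis=-1)
    return (T ** 2) * kl.mean()

rng = np.random.default_rng(0)
teachers = rng.normal(size=(3, 8, 4))  # 3 teachers, batch of 8, 4 classes
student = rng.normal(size=(8, 4))
targets, weights = fuse_teachers(teachers)
print(distill_loss(student, targets))  # scalar distillation loss

In an actual training loop this loss would be combined with a supervised term on labeled data and, per the abstract, with domain-adaptation and semi-supervised objectives; those components are omitted here.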