{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T05:57:32Z","timestamp":1769061452622,"version":"3.49.0"},"reference-count":51,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T00:00:00Z","timestamp":1768953600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Shenzhen Science and Technology Program","award":["KJZD20240903104 301003"],"award-info":[{"award-number":["KJZD20240903104 301003"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["82501368"],"award-info":[{"award-number":["82501368"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J. Imaging"],"abstract":"<jats:p>Meibomian gland dysfunction (MGD) is a leading cause of dry eye disease, assessable through gland atrophy degree. While deep learning (DL) has advanced meibomian gland (MG) segmentation and MGD classification, existing methods treat these tasks independently and suffer from domain shift across multi-center imaging devices. We propose ADAM-Net, an attention-guided unsupervised domain adaptation multi-task framework that jointly models MG segmentation and MGD classification. Our model introduces structure-aware multi-task learning and anatomy-guided attention to enhance feature sharing, suppress background noise, and improve glandular region perception. For the cross-domain tasks MGD-1K\u2192{K5M, CR-2, LV II}, this study systematically evaluates the overall performance of ADAM-Net from multiple perspectives. 
The experimental results show that ADAM-Net achieves classification accuracies of 77.93%, 74.86%, and 81.77% on the target domains, significantly outperforming current mainstream unsupervised domain adaptation (UDA) methods. The F1-score and the Matthews correlation coefficient (MCC-score) indicate that the model maintains robust discriminative capability even under class-imbalanced scenarios. t-SNE visualizations further validate its cross-domain feature alignment capability. These results demonstrate that ADAM-Net exhibits strong robustness and interpretability in multi-center scenarios and provides an effective solution for automated MGD assessment.<\/jats:p>","DOI":"10.3390\/jimaging12010050","type":"journal-article","created":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T13:59:54Z","timestamp":1769003994000},"page":"50","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["ADAM-Net: Anatomy-Guided Attentive Unsupervised Domain Adaptation for Joint MG Segmentation and MGD Grading"],"prefix":"10.3390","volume":"12","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0730-8732","authenticated-orcid":false,"given":"Junbin","family":"Fang","sequence":"first","affiliation":[{"name":"Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China"},{"name":"Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510632, China"},{"name":"Guangdong Provincial Engineering Technology Research Center on Visible Light Communication, Guangzhou 510632, China"},{"name":"Guangzhou Municipal Key Laboratory of Engineering Technology on Visible Light Communication, Guangzhou 510632, China"}]},{"given":"Xuan","family":"He","sequence":"additional","affiliation":[{"name":"Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China"},{"name":"Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510632, 
China"},{"name":"Guangdong Provincial Engineering Technology Research Center on Visible Light Communication, Guangzhou 510632, China"},{"name":"Guangzhou Municipal Key Laboratory of Engineering Technology on Visible Light Communication, Guangzhou 510632, China"}]},{"given":"You","family":"Jiang","sequence":"additional","affiliation":[{"name":"Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China"},{"name":"Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510632, China"},{"name":"Guangdong Provincial Engineering Technology Research Center on Visible Light Communication, Guangzhou 510632, China"},{"name":"Guangzhou Municipal Key Laboratory of Engineering Technology on Visible Light Communication, Guangzhou 510632, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5002-3708","authenticated-orcid":false,"given":"Mini Han","family":"Wang","sequence":"additional","affiliation":[{"name":"Faculty of Medicine, Chinese University of Hong Kong, Hong Kong 999077, China"}]}],"member":"1968","published-online":{"date-parts":[[2026,1,21]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Lin, X., Yu, X., Fu, Y., Chen, X., Yang, W., and Dai, Q. (2022). Meibomian gland density: An effective evaluation index of meibomian gland dysfunction based on deep learning and transfer learning. J. Clin. Med., 11.","DOI":"10.3390\/jcm11092396"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"657","DOI":"10.1016\/j.jtos.2020.06.009","article-title":"Association of meibomian gland architecture and body mass index in a pediatric population","volume":"18","author":"Gupta","year":"2020","journal-title":"Ocul. Surf."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"746","DOI":"10.1136\/bjophthalmol-2012-303014","article-title":"Objective image analysis of the meibomian gland area","volume":"98","author":"Arita","year":"2014","journal-title":"Br. J. 
Ophthalmol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1097\/00003226-199801000-00007","article-title":"Evaluation of subjective assessments and objective diagnostic tests for diagnosing tear-film disorders known to cause ocular irritation","volume":"17","author":"Pflugfelder","year":"1998","journal-title":"Cornea"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1016\/j.clae.2012.10.074","article-title":"Comparison of subjective grading and objective assessment in meibography","volume":"36","author":"Pult","year":"2013","journal-title":"Contact Lens Anterior Eye"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1167\/tvst.8.6.37","article-title":"A deep learning approach for meibomian gland atrophy evaluation in meibography images","volume":"8","author":"Wang","year":"2019","journal-title":"Transl. Vis. Sci. Technol."},{"key":"ref_7","unstructured":"Llorens-Quintana, C., Syga, P., and Iskander, D.R. (2018, January 25\u201328). Automated image processing algorithm for infrared meibography. Proceedings of the Imaging Systems and Applications, Optica Publishing Group, Orlando, FL, USA."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1167\/tvst.8.4.17","article-title":"A novel automated approach for infrared-based assessment of meibomian gland morphology","volume":"8","author":"Syga","year":"2019","journal-title":"Transl. Vis. Sci. Technol."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"086008","DOI":"10.1117\/1.JBO.17.8.086008","article-title":"Detection of meibomian glands and classification of meibography images","volume":"17","author":"Koh","year":"2012","journal-title":"J. Biomed. 
Opt."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"865","DOI":"10.1016\/j.jtos.2020.09.005","article-title":"2D fourier transform for global analysis and classification of meibomian gland images","volume":"18","author":"Pochylski","year":"2020","journal-title":"Ocul. Surf."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"283","DOI":"10.1016\/j.jtos.2022.06.006","article-title":"Automated quantification of meibomian gland dropout in infrared meibography using deep learning","volume":"26","author":"Saha","year":"2022","journal-title":"Ocul. Surf."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"106742","DOI":"10.1016\/j.cmpb.2022.106742","article-title":"Health classification of Meibomian gland images using keratography 5M based on AlexNet model","volume":"219","author":"Luo","year":"2022","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"101776","DOI":"10.1016\/j.bspc.2019.101776","article-title":"Deep learning segmentation and quantification of Meibomian glands","volume":"57","author":"Prabhu","year":"2020","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"7649","DOI":"10.1038\/s41598-021-87314-8","article-title":"Deep learning-based automatic meibomian gland segmentation and morphology assessment in infrared meibography","volume":"11","author":"Setu","year":"2021","journal-title":"Sci. Rep."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1615","DOI":"10.1002\/mp.17542","article-title":"Strip and boundary detection multi-task learning network for segmentation of meibomian glands","volume":"52","author":"Zhu","year":"2025","journal-title":"Med. Phys."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"012025","DOI":"10.1088\/1742-6596\/2650\/1\/012025","article-title":"Can explainable artificial intelligence optimize the data quality of machine learning model? 
Taking Meibomian gland dysfunction detections as a case study","volume":"2650","author":"Wang","year":"2023","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1167\/tvst.10.2.4","article-title":"Meibography phenotyping and classification from unsupervised discriminative feature learning","volume":"10","author":"Yeh","year":"2021","journal-title":"Transl. Vis. Sci. Technol."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1134","DOI":"10.1145\/1968.1972","article-title":"A theory of the learnable","volume":"27","author":"Valiant","year":"1984","journal-title":"Commun. ACM"},{"key":"ref_19","unstructured":"Qui\u00f1onero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N.D. (2022). Dataset Shift in Machine Learning, MIT Press."},{"key":"ref_20","unstructured":"Ben-David, S., Blitzer, J., Crammer, K., and Pereira, F. (2006, January 4\u20137). Analysis of representations for domain adaptation. Proceedings of the NIPS\u201906: Proceedings of the 20th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1173","DOI":"10.1109\/TBME.2021.3117407","article-title":"Domain adaptation for medical image analysis: A survey","volume":"69","author":"Guan","year":"2021","journal-title":"IEEE Trans. Biomed. Eng."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"iv35","DOI":"10.1093\/noajnl\/vdaa092","article-title":"Deep learning for medical image analysis: A brief introduction","volume":"2","author":"Wiestler","year":"2020","journal-title":"Neuro-Oncol. Adv."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"197","DOI":"10.1016\/j.media.2019.01.012","article-title":"Attention gated networks: Learning to leverage salient regions in medical images","volume":"53","author":"Schlemper","year":"2019","journal-title":"Med. 
Image Anal."},{"key":"ref_24","unstructured":"Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., and Kainz, B. (2018). Attention u-net: Learning where to look for the pancreas. arXiv."},{"key":"ref_25","unstructured":"Ganin, Y., and Lempitsky, V. (2015, January 6\u201311). Unsupervised domain adaptation by backpropagation. Proceedings of the International Conference on Machine Learning, PMLR, Lille, France."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"102685","DOI":"10.1016\/j.media.2022.102685","article-title":"One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification","volume":"83","author":"Graham","year":"2023","journal-title":"Med. Image Anal."},{"key":"ref_27","unstructured":"Kendall, A., Gal, Y., and Cipolla, R. (June, January 18\u2013). Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"1199","DOI":"10.1109\/JSTSP.2020.3005317","article-title":"Unsupervised mitochondria segmentation in EM images via domain adaptive multi-task learning","volume":"14","author":"Peng","year":"2020","journal-title":"IEEE J. Sel. Top. Signal Process."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). Cbam: Convolutional block attention module. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical IMAGE Computing and Computer-Assisted Intervention, Springer.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_32","unstructured":"Ioffe, S., and Szegedy, C. (2015, January 6\u201311). Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the International Conference on Machine Learning, PMLR, Lille, France."},{"key":"ref_33","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Ren, J., Hacihaliloglu, I., Singer, E.A., Foran, D.J., and Qi, X. (2018). Adversarial domain adaptation for classification of prostate histopathology whole-slide images. Proceedings of the International Conference on Medical Image computing and Computer-Assisted Intervention, Springer.","DOI":"10.1007\/978-3-030-00934-2_23"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Zhang, J., Liu, M., Pan, Y., and Shen, D. (2019). Unsupervised conditional consensus adversarial network for brain disease identification with structural MRI. Proceedings of the International Workshop on Machine Learning in Medical Imaging, Springer.","DOI":"10.1007\/978-3-030-32692-0_45"},{"key":"ref_36","unstructured":"Long, M., Cao, Z., Wang, J., and Jordan, M.I. (2018, January 3\u20138). Conditional adversarial domain adaptation. 
Proceedings of the NIPS\u201918: Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"2300","DOI":"10.1177\/1120672120969035","article-title":"Ocular surface analysis: A comparison between the LipiView\u00ae II and IDRA\u00ae","volume":"31","author":"Lee","year":"2021","journal-title":"Eur. J. Ophthalmol."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"155","DOI":"10.1080\/02713683.2017.1393092","article-title":"Imaging the tear film: A comparison between the subjective keeler tearscope-plus\u2122 and the objective oculus\u00ae keratograph 5M and LipiView\u00ae interferometer","volume":"43","author":"Markoulli","year":"2018","journal-title":"Curr. Eye Res."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"40","DOI":"10.3758\/BF03213026","article-title":"Theoretical analysis of an alphabetic confusion matrix","volume":"9","author":"Townsend","year":"1971","journal-title":"Percept. Psychophys."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"4023","DOI":"10.53555\/AJBR.v27i4S.4345","article-title":"Confusion matrix-based performance evaluation metrics","volume":"27","author":"Sathyanarayanan","year":"2024","journal-title":"Afr. J. Biomed. Res."},{"key":"ref_41","unstructured":"Powers, D.M. (2020). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Chicco, D., and Jurman, G. (2020). The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. 
BMC Genom., 21.","DOI":"10.1186\/s12864-019-6413-7"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"297","DOI":"10.2307\/1932409","article-title":"Measures of the amount of ecologic association between species","volume":"26","author":"Dice","year":"1945","journal-title":"Ecology"},{"key":"ref_44","unstructured":"Long, M., Cao, Y., Wang, J., and Jordan, M. (2015, January 6\u201311). Learning transferable features with deep adaptation networks. Proceedings of the International Conference on Machine Learning, PMLR, Lille, France."},{"key":"ref_45","unstructured":"Long, M., Zhu, H., Wang, J., and Jordan, M.I. (2017, January 6\u201311). Deep transfer learning with joint adaptation networks. Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia."},{"key":"ref_46","unstructured":"Zhang, Y., Liu, T., Long, M., and Jordan, M. (2019, January 9\u201315). Bridging theory and algorithm for domain adaptation. Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA."},{"key":"ref_47","unstructured":"Chen, X., Wang, S., Long, M., and Wang, J. (2019, January 9\u201315). Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Jin, Y., Wang, X., Long, M., and Wang, J. (2020, January 23\u201328). Minimum class confusion for versatile domain adaptation. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58589-1_28"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-cam: Visual explanations from deep networks via gradient-based localization. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Varoquaux, G., and Colliot, O. (2023). Evaluating machine learning models and their diagnostic value. Machine Learning for Brain Disorders, Springer.","DOI":"10.1007\/978-1-0716-3195-9_20"},{"key":"ref_51","unstructured":"Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., and Darrell, T. (2014). Deep domain confusion: Maximizing for domain invariance. arXiv."}],"container-title":["Journal of Imaging"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2313-433X\/12\/1\/50\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T14:32:00Z","timestamp":1769005920000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2313-433X\/12\/1\/50"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,21]]},"references-count":51,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2026,1]]}},"alternative-id":["jimaging12010050"],"URL":"https:\/\/doi.org\/10.3390\/jimaging12010050","relation":{},"ISSN":["2313-433X"],"issn-type":[{"value":"2313-433X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,21]]}}}