{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:03:08Z","timestamp":1775228588154,"version":"3.50.1"},"reference-count":45,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2022,3,4]],"date-time":"2022-03-04T00:00:00Z","timestamp":1646352000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61671480, 61836002, 62020106007"],"award-info":[{"award-number":["61671480, 61836002, 62020106007"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Major Scientific and Technological Projects of CNPC","award":["ZD2019-183-008"],"award-info":[{"award-number":["ZD2019-183-008"]}]},{"name":"Open Project Program of the National Laboratory of Pattern Recognition","award":["202000009"],"award-info":[{"award-number":["202000009"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2022,8,31]]},"abstract":"<jats:p>\n            Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that sufficient labeled data, which are treated as prior knowledge, are available in the source domain. However, these algorithms become infeasible when only a few labeled samples exist in the source domain, and their performance decreases significantly. To address this challenge, we propose a\n            <jats:bold>Domain-invariant Graph Learning (DGL)<\/jats:bold>\n            approach for domain adaptation with only a few labeled source samples. 
First, DGL introduces the Nystr\u00f6m method to construct a plastic graph that shares similar geometric properties with the target domain. Then, DGL flexibly employs the Nystr\u00f6m approximation error to measure the divergence between the plastic graph and the source graph, formalizing the distribution mismatch from a geometric perspective. By minimizing the approximation error, DGL learns a domain-invariant geometric graph to bridge the source and target domains. Finally, we integrate the learned domain-invariant graph with semi-supervised learning and further propose an adaptive semi-supervised model to handle cross-domain problems. The results of extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.\n          <\/jats:p>","DOI":"10.1145\/3487194","type":"journal-article","created":{"date-parts":[[2022,3,4]],"date-time":"2022-03-04T10:26:32Z","timestamp":1646389592000},"page":"1-18","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation"],"prefix":"10.1145","volume":"18","author":[{"given":"Jinfeng","family":"Li","sequence":"first","affiliation":[{"name":"China University of Petroleum (East China), The State Key Laboratory of Integrated Services Networks, Xidian University, Huangdao District, Qingdao, China"}]},{"given":"Weifeng","family":"Liu","sequence":"additional","affiliation":[{"name":"China University of Petroleum (East China), The State Key Laboratory of Integrated Services Networks, Xidian University, Huangdao District, Qingdao, China"}]},{"given":"Yicong","family":"Zhou","sequence":"additional","affiliation":[{"name":"University of Macau, Macau, China"}]},{"given":"Jun","family":"Yu","sequence":"additional","affiliation":[{"name":"Hangzhou Dianzi University, Hangzhou, 
China"}]},{"given":"Dapeng","family":"Tao","sequence":"additional","affiliation":[{"name":"Yunnan University, Kunming, Yunnan, China"}]},{"given":"Changsheng","family":"Xu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2022,3,4]]},"reference":[{"key":"e_1_3_1_2_2","volume-title":"Interior-point Methods for Large-scale Cone Programming","author":"Andersen M.","year":"2011","unstructured":"M. Andersen, J. Dahl, Z. Liu, and L. Vandenberghe. 2011. Interior-point Methods for Large-scale Cone Programming."},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.5555\/1248547.1248632"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/VCIP.2016.7805516"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2016.2547397"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2615921"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/1273496.1273521"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01424-7_34"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2631887"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.5555\/1046920.1194916"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2004.1262185"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/1401890.1401928"},{"key":"e_1_3_1_13_2","first-page":"2066","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Gong B.","year":"2012","unstructured":"B. Gong, Y. Shi, F. Sha, and K. Grauman. 2012. Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
2066\u20132073."},{"key":"e_1_3_1_14_2","first-page":"513","volume-title":"Proceedings of the Advances in Neural Information Processing Systems (NIPS)","author":"Gretton A.","year":"2006","unstructured":"A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Sch\u00f6lkopf, and A. J. Smola. 2006. A kernel method for the two-sample-problem. In Proceedings of the Advances in Neural Information Processing Systems (NIPS). 513\u2013520."},{"key":"e_1_3_1_15_2","article-title":"Caltech-256 object category dataset","author":"Griffin G.","year":"2007","unstructured":"G. Griffin, A. Holub, and P. Perona. 2007. Caltech-256 object category dataset. CalTech Report (2007).","journal-title":"CalTech Report"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2019.2913379"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394171.3413701"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394171.3413986"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2014.2359798"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2017.7966016"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/1961189.1961199"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2018.2843342"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00737"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/tkde.2013.111"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.183"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2014.2373376"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/tnn.2010.2091281"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2009.191"},{"key":"e_1_3_1_29_2","article-title":"Domain adaptation on graphs by learning aligned graph bases","author":"Pilanci M.","year":"2020","unstructured":"M. Pilanci and E. Vural. 2020. 
Domain adaptation on graphs by learning aligned graph bases. IEEE Transactions on Knowledge and Data Engineering (2020).","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126287"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2017.150"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3240508.3240512"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2018.2864732"},{"key":"e_1_3_1_34_2","first-page":"682","volume-title":"Proceedings of Advances in Neural Information Processing Systems (NIPS)","author":"Williams C.","year":"2000","unstructured":"C. Williams and M. Seeger. 2000. Using the Nystr\u00f6m method to speed up kernel machines. In Proceedings of Advances in Neural Information Processing Systems (NIPS). 682\u2013688."},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2013.167"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2016.2633306"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/2700286"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298826"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2013.2284755"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2014.2311377"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.547"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/1390156.1390311"},{"key":"e_1_3_1_43_2","article-title":"Transfer adaptation learning: A decade survey","volume":"1903","author":"Zhang L.","year":"2019","unstructured":"L. Zhang. 2019. Transfer adaptation learning: A decade survey. CoRR abs\/1903.04687 (2019). 
arXiv:1903.04687. http:\/\/arxiv.org\/abs\/1903.04687","journal-title":"CoRR"},{"key":"e_1_3_1_44_2","first-page":"1","article-title":"Guide subspace learning for unsupervised domain adaptation","author":"Zhang L.","year":"2019","unstructured":"L. Zhang, J. Fu, S. Wang, D. Zhang, Z. Dong, and C. L. P. Chen. 2019. Guide subspace learning for unsupervised domain adaptation. IEEE Transactions on Neural Networks and Learning Systems (2019), 1\u201315.","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2018.8451245"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2011.143"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3487194","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3487194","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:47Z","timestamp":1750191527000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3487194"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,4]]},"references-count":45,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,8,31]]}},"alternative-id":["10.1145\/3487194"],"URL":"https:\/\/doi.org\/10.1145\/3487194","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"value":"1551-6857","type":"print"},{"value":"1551-6865","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,4]]},"assertion":[{"value":"2020-12-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2021-09-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-03-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}