{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T03:39:00Z","timestamp":1771299540932,"version":"3.50.1"},"reference-count":82,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2024,6,19]],"date-time":"2024-06-19T00:00:00Z","timestamp":1718755200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2022ZD0209103"],"award-info":[{"award-number":["2022ZD0209103"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2024,8,31]]},"abstract":"<jats:p>\n            Graph neural networks (GNNs) have shown great potential in representation learning for various graph tasks. However, the distribution shift between the training and test sets poses a challenge to the efficiency of GNNs. To address this challenge,\n            <jats:sc>HomoTTT<\/jats:sc>\n            \u00a0 proposes a fully test-time training framework for GNNs to enhance the model\u2019s generalization capabilities for node classification tasks. Specifically, our proposed\n            <jats:sc>HomoTTT<\/jats:sc>\n            \u00a0 designs a homophily-based and parameter-free graph contrastive learning task with adaptive augmentation to guide the model\u2019s adaptation during the test-time training, allowing the model to adapt for specific target data. 
In the inference stage,\n            <jats:sc>HomoTTT<\/jats:sc>\n            \u00a0 proposes to integrate the original GNN model and the adapted model after TTT using a homophily-based model selection method, which prevents potential performance degradation caused by unconstrained model adaptation. Extensive experimental results on six benchmark datasets demonstrate the effectiveness of our proposed framework. Additionally, the exploratory study further validates the rationality of the homophily-based graph contrastive learning task with adaptive augmentation and the homophily-based model selection designed in\u00a0\n            <jats:sc>HomoTTT<\/jats:sc>\n            .\n          <\/jats:p>","DOI":"10.1145\/3649507","type":"journal-article","created":{"date-parts":[[2024,2,26]],"date-time":"2024-02-26T12:35:06Z","timestamp":1708950906000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["A Fully Test-time Training Framework for Semi-supervised Node Classification on Out-of-Distribution Graphs"],"prefix":"10.1145","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1309-5865","authenticated-orcid":false,"given":"Jiaxin","family":"Zhang","sequence":"first","affiliation":[{"name":"National University of Defense Technology, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9594-1919","authenticated-orcid":false,"given":"Yiqi","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Computer Science and Technology, National University of Defense Technology, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3260-869X","authenticated-orcid":false,"given":"Xihong","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Computer Science and Technology, National University of Defense Technology, Changsha, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2305-7555","authenticated-orcid":false,"given":"En","family":"Zhu","sequence":"additional","affiliation":[{"name":"School of Computer Science, National University of Defense Technology, Changsha, China"}]}],"member":"320","published-online":{"date-parts":[[2024,6,19]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"3080","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics","author":"Bartler Alexander","year":"2022","unstructured":"Alexander Bartler, Andre B\u00fchler, Felix Wiewel, Mario D\u00f6bler, and Bin Yang. 2022. Mt3: Meta test-time training for self-supervised test-time adaption. In Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR, 3080\u20133090."},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6243"},{"key":"e_1_3_1_4_2","article-title":"GraphTTA: Test time adaptation on graph neural networks","author":"Chen Guanzi","year":"2022","unstructured":"Guanzi Chen, Jiying Zhang, Xi Xiao, and Yang Li. 2022. GraphTTA: Test time adaptation on graph neural networks. arXiv:2208.09126. Retrieved from https:\/\/arxiv.org\/abs\/2208.09126","journal-title":"arXiv:2208.09126"},{"key":"e_1_3_1_5_2","article-title":"Invariance principle meets out-of-distribution generalization on graphs","author":"Chen Yongqiang","year":"2022","unstructured":"Yongqiang Chen, Yonggang Zhang, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, and James Cheng. 2022. Invariance principle meets out-of-distribution generalization on graphs. arXiv:2202.05441. 
Retrieved from https:\/\/arxiv.org\/abs\/2202.05441","journal-title":"arXiv:2202.05441"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2788606"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313488"},{"key":"e_1_3_1_8_2","article-title":"Inductive representation learning on large graphs","volume":"30","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, Vol. 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_9_2","first-page":"8230","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Han Xiaotian","year":"2022","unstructured":"Xiaotian Han, Zhimeng Jiang, Ninghao Liu, and Xia Hu. 2022. G-mixup: Graph data augmentation for graph classification. In Proceedings of the International Conference on Machine Learning. PMLR, 8230\u20138248."},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401063"},{"key":"e_1_3_1_11_2","article-title":"Benchmarking neural network robustness to common corruptions and perturbations","author":"Hendrycks Dan","year":"2019","unstructured":"Dan Hendrycks and Thomas Dietterich. 2019. Benchmarking neural network robustness to common corruptions and perturbations. arXiv:1903.12261. Retrieved from https:\/\/arxiv.org\/abs\/1903.12261","journal-title":"arXiv:1903.12261"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3016780"},{"key":"e_1_3_1_13_2","first-page":"22118","article-title":"Open graph benchmark: Datasets for machine learning on graphs","volume":"33","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. 
In Advances in Neural Information Processing Systems, Vol. 33 (2020), 22118\u201322133.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_14_2","article-title":"Multi-view fuzzy classification with subspace clustering and information granules","author":"Hu Xingchen","year":"2022","unstructured":"Xingchen Hu, Xinwang Liu, Witold Pedrycz, Qing Liao, Yinghua Shen, Yan Li, and Siwei Wang. 2022. Multi-view fuzzy classification with subspace clustering and information granules. IEEE Trans. Knowl. Data Eng. (2022).","journal-title":"IEEE Trans. Knowl. Data Eng."},{"issue":"7","key":"e_1_3_1_15_2","first-page":"6406","article-title":"Identification of fuzzy rule-based models with collaborative fuzzy clustering","volume":"52","author":"Hu Xingchen","year":"2021","unstructured":"Xingchen Hu, Yinghua Shen, Witold Pedrycz, Xianmin Wang, Adam Gacek, and Bingsheng Liu. 2021. Identification of fuzzy rule-based models with collaborative fuzzy clustering. IEEE Trans. Cybernet. 52, 7 (2021), 6406\u20136419.","journal-title":"IEEE Trans. Cybernet."},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403237"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3543507.3583335"},{"key":"e_1_3_1_18_2","article-title":"Revisiting the role of heterophily in graph representation learning: An edge classification perspective","author":"Huang Jincheng","year":"2023","unstructured":"Jincheng Huang, Ping Li, Rui Huang, Na Chen, and Acong Zhang. 2023. Revisiting the role of heterophily in graph representation learning: An edge classification perspective. ACM Trans. Knowl. Discov. Data (2023).","journal-title":"ACM Trans. Knowl. Discov. Data"},{"key":"e_1_3_1_19_2","first-page":"1","volume-title":"Proceedings of the International Joint Conference on Neural Networks (IJCNN\u201922)","author":"Huang Jincheng","year":"2022","unstructured":"Jincheng Huang, Pin Li, and Kai Zhang. 2022. 
Semantic consistency for graph representation learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN\u201922). IEEE, 1\u20138."},{"key":"e_1_3_1_20_2","article-title":"Text level graph neural network for text classification","author":"Huang Lianzhe","year":"2019","unstructured":"Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2019. Text level graph neural network for text classification. arXiv:1910.02356. Retrieved from https:\/\/arxiv.org\/abs\/1910.02356","journal-title":"arXiv:1910.02356"},{"key":"e_1_3_1_21_2","first-page":"9377","volume-title":"International Conference on Machine Learning","author":"Huang Zhongyu","year":"2022","unstructured":"Zhongyu Huang, Yingheng Wang, Chaozhuo Li, and Huiguang He. 2022. Going deeper into permutation-sensitive graph neural networks. In International Conference on Machine Learning. PMLR, 9377\u20139409."},{"key":"e_1_3_1_22_2","article-title":"Automated self-supervised learning for graphs","author":"Jin Wei","year":"2021","unstructured":"Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, and Jiliang Tang. 2021. Automated self-supervised learning for graphs. arXiv:2106.05470. Retrieved from https:\/\/arxiv.org\/abs\/2106.05470","journal-title":"arXiv:2106.05470"},{"key":"e_1_3_1_23_2","article-title":"Empowering graph representation learning with test-time graph transformation","author":"Jin Wei","year":"2022","unstructured":"Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, and Neil Shah. 2022. Empowering graph representation learning with test-time graph transformation. arXiv:2210.03561. Retrieved from https:\/\/arxiv.org\/abs\/2210.03561","journal-title":"arXiv:2210.03561"},{"key":"e_1_3_1_24_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2016","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907. 
Retrieved from https:\/\/arxiv.org\/abs\/1609.02907","journal-title":"arXiv:1609.02907"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00016"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462926"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2022.3193725"},{"key":"e_1_3_1_28_2","first-page":"11828","article-title":"Learning invariant graph representations for out-of-distribution generalization","volume":"35","author":"Li Haoyang","year":"2022","unstructured":"Haoyang Li, Ziwei Zhang, Xin Wang, and Wenwu Zhu. 2022. Learning invariant graph representations for out-of-distribution generalization. In Advances in Neural Information Processing Systems, Vol. 35, 11828\u201311841.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_29_2","article-title":"HomoGCL: Rethinking homophily in graph contrastive learning","author":"Li Wen-Zhi","year":"2023","unstructured":"Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, and Jian-Huang Lai. 2023. HomoGCL: Rethinking homophily in graph contrastive learning. arXiv:2306.09614. Retrieved from https:\/\/arxiv.org\/abs\/2306.09614","journal-title":"arXiv:2306.09614"},{"key":"e_1_3_1_30_2","article-title":"New benchmarks for learning on non-homophilous graphs","author":"Lim Derek","year":"2021","unstructured":"Derek Lim, Xiuyu Li, Felix Hohne, and Ser-Nam Lim. 2021. New benchmarks for learning on non-homophilous graphs. arXiv:2104.01404. Retrieved from https:\/\/arxiv.org\/abs\/2104.01404","journal-title":"arXiv:2104.01404"},{"key":"e_1_3_1_31_2","article-title":"Self-supervised temporal graph learning with temporal and structural intensity alignment","author":"Liu Meng","year":"2023","unstructured":"Meng Liu, Ke Liang, Yawei Zhao, Wenxuan Tu, Sihang Zhou, Xinwang Liu, and Kunlun He. 2023. Self-supervised temporal graph learning with temporal and structural intensity alignment. arXiv:2302.07491. 
Retrieved from https:\/\/arxiv.org\/abs\/2302.07491","journal-title":"arXiv:2302.07491"},{"key":"e_1_3_1_32_2","first-page":"21808","article-title":"TTT++: When does self-supervised test-time training fail or thrive?","volume":"34","author":"Liu Yuejiang","year":"2021","unstructured":"Yuejiang Liu, Parth Kothari, Bastien Van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, and Alexandre Alahi. 2021. TTT++: When does self-supervised test-time training fail or thrive? In Advances in Neural Information Processing Systems, Vol. 34, 21808\u201321820.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_33_2","article-title":"Simple contrastive graph clustering","author":"Liu Yue","year":"2023","unstructured":"Yue Liu, Xihong Yang, Sihang Zhou, Xinwang Liu, Siwei Wang, Ke Liang, Wenxuan Tu, and Liang Li. 2023. Simple contrastive graph clustering. IEEE Trans. Neural Netw. Learn. Syst. (2023).","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330982"},{"key":"e_1_3_1_35_2","first-page":"466","volume-title":"European Conference on Computer Vision","author":"Mancini Massimiliano","year":"2020","unstructured":"Massimiliano Mancini, Zeynep Akata, Elisa Ricci, and Barbara Caputo. 2020. Towards recognizing unseen categories in unseen domains. In European Conference on Computer Vision. Springer, 466\u2013483."},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2023.3268069"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.5555\/3618408.3619448"},{"key":"e_1_3_1_38_2","first-page":"7797","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence (AAAI\u201922)","author":"Mo Yujie","year":"2022","unstructured":"Yujie Mo, Liang Peng, Jie Xu, Xiaoshuang Shi, and Xiaofeng Zhu. 2022. Simple unsupervised graph representation learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI\u201922). 
7797\u20137805."},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5984"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/2623330.2623732"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.7551\/mitpress\/9780262170055.001.0001"},{"key":"e_1_3_1_42_2","article-title":"Dropedge: Towards deep graph convolutional networks on node classification","author":"Rong Yu","year":"2019","unstructured":"Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. 2019. Dropedge: Towards deep graph convolutional networks on node classification. arXiv:1907.10903. Retrieved from https:\/\/arxiv.org\/abs\/1907.10903","journal-title":"arXiv:1907.10903"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3424672"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1093\/comnet\/cnab014"},{"key":"e_1_3_1_45_2","article-title":"Pitfalls of graph neural network evaluation","author":"Shchur Oleksandr","year":"2018","unstructured":"Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan G\u00fcnnemann. 2018. Pitfalls of graph neural network evaluation. arXiv:1811.05868. Retrieved from https:\/\/arxiv.org\/abs\/1811.05868","journal-title":"arXiv:1811.05868"},{"key":"e_1_3_1_46_2","article-title":"Towards out-of-distribution generalization: A survey","author":"Shen Zheyan","year":"2021","unstructured":"Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, and Peng Cui. 2021. Towards out-of-distribution generalization: A survey. arXiv:2108.13624. Retrieved from https:\/\/arxiv.org\/abs\/2108.13624","journal-title":"arXiv:2108.13624"},{"key":"e_1_3_1_47_2","article-title":"Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization","author":"Sun Fan-Yun","year":"2019","unstructured":"Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. 2019. 
Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv:1908.01000. Retrieved from https:\/\/arxiv.org\/abs\/1908.01000","journal-title":"arXiv:1908.01000"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.6048"},{"key":"e_1_3_1_49_2","unstructured":"Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, and Moritz Hardt. 2019. Test-time training for out-of-distribution generalization."},{"key":"e_1_3_1_50_2","first-page":"15920","article-title":"Adversarial graph augmentation to improve graph contrastive learning","volume":"34","author":"Suresh Susheel","year":"2021","unstructured":"Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. 2021. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, Vol. 34, 15920\u201315933.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2019.00002"},{"key":"e_1_3_1_52_2","first-page":"20341","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Tomar Devavrat","year":"2023","unstructured":"Devavrat Tomar, Guillaume Vray, Behzad Bozorgtabar, and Jean-Philippe Thiran. 2023. TeSLA: Test-time self-learning with automatic adversarial augmentation. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 20341\u201320350."},{"key":"e_1_3_1_53_2","article-title":"Graph attention networks","author":"Veli\u010dkovi\u0107 Petar","year":"2017","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv:1710.10903. 
Retrieved from https:\/\/arxiv.org\/abs\/1710.10903","journal-title":"arXiv:1710.10903"},{"issue":"3","key":"e_1_3_1_54_2","first-page":"4","article-title":"Deep graph infomax.","volume":"2","author":"Velickovic Petar","year":"2019","unstructured":"Petar Velickovic, William Fedus, William L. Hamilton, Pietro Li\u00f2, Yoshua Bengio, and R. Devon Hjelm. 2019. Deep graph infomax. ICLR (Poster) 2, 3 (2019), 4.","journal-title":"ICLR (Poster)"},{"key":"e_1_3_1_55_2","article-title":"Tent: Fully test-time adaptation by entropy minimization","author":"Wang Dequan","year":"2020","unstructured":"Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. 2020. Tent: Fully test-time adaptation by entropy minimization. arXiv:2006.10726. Retrieved from https:\/\/arxiv.org\/abs\/2006.10726","journal-title":"arXiv:2006.10726"},{"key":"e_1_3_1_56_2","article-title":"Generalizing to unseen domains: A survey on domain generalization","author":"Wang Jindong","year":"2022","unstructured":"Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. IEEE Trans. Knowl. Data Eng. (2022).","journal-title":"IEEE Trans. Knowl. Data Eng."},{"key":"e_1_3_1_57_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2017.2754499"},{"key":"e_1_3_1_58_2","article-title":"Test-time training for graph neural networks","author":"Wang Yiqi","year":"2022","unstructured":"Yiqi Wang, Chaozhuo Li, Wei Jin, Rui Li, Jianan Zhao, Jiliang Tang, and Xing Xie. 2022. Test-time training for graph neural networks. arXiv:2210.08813. 
Retrieved from https:\/\/arxiv.org\/abs\/2210.08813","journal-title":"arXiv:2210.08813"},{"key":"e_1_3_1_59_2","first-page":"540","volume-title":"Proceedings of the SIAM International Conference on Data Mining (SDM\u201922)","author":"Wang Yiqi","year":"2022","unstructured":"Yiqi Wang, Chaozhuo Li, Mingzheng Li, Wei Jin, Yuming Liu, Hao Sun, Xing Xie, and Jiliang Tang. 2022. Localized graph collaborative filtering. In Proceedings of the SIAM International Conference on Data Mining (SDM\u201922). SIAM, 540\u2013548."},{"key":"e_1_3_1_60_2","first-page":"11815","article-title":"Knowledge distillation improves graph structure augmentation for graph neural networks","volume":"35","author":"Wu Lirong","year":"2022","unstructured":"Lirong Wu, Haitao Lin, Yufei Huang, and Stan Z. Li. 2022. Knowledge distillation improves graph structure augmentation for graph neural networks. In Advances in Neural Information Processing Systems, Vol. 35, 11815\u201311827.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_61_2","article-title":"Energy-based out-of-distribution detection for graph neural networks","author":"Wu Qitian","year":"2023","unstructured":"Qitian Wu, Yiting Chen, Chenxiao Yang, and Junchi Yan. 2023. Energy-based out-of-distribution detection for graph neural networks. arXiv:2302.02914. Retrieved from https:\/\/arxiv.org\/abs\/2302.02914","journal-title":"arXiv:2302.02914"},{"key":"e_1_3_1_62_2","article-title":"Handling distribution shifts on graphs: An invariance perspective","author":"Wu Qitian","year":"2022","unstructured":"Qitian Wu, Hengrui Zhang, Junchi Yan, and David Wipf. 2022. Handling distribution shifts on graphs: An invariance perspective. arXiv:2202.02466. 
Retrieved from https:\/\/arxiv.org\/abs\/2202.02466","journal-title":"arXiv:2202.02466"},{"key":"e_1_3_1_63_2","article-title":"Topology attack and defense for graph neural networks: An optimization perspective","author":"Xu Kaidi","year":"2019","unstructured":"Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. 2019. Topology attack and defense for graph neural networks: An optimization perspective. arXiv:1906.04214. Retrieved from https:\/\/arxiv.org\/abs\/1906.04214","journal-title":"arXiv:1906.04214"},{"key":"e_1_3_1_64_2","article-title":"How powerful are graph neural networks?","author":"Xu Keyulu","year":"2018","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv:1810.00826. Retrieved from https:\/\/arxiv.org\/abs\/1810.00826","journal-title":"arXiv:1810.00826"},{"key":"e_1_3_1_65_2","article-title":"Interpolation-based contrastive learning for few-label semi-supervised learning","author":"Yang Xihong","year":"2022","unstructured":"Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, and En Zhu. 2022. Interpolation-based contrastive learning for few-label semi-supervised learning. IEEE Trans. Neural Netw. Learn. Syst. (2022).","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"e_1_3_1_66_2","article-title":"Mixed graph contrastive network for semi-supervised node classification","author":"Yang Xihong","year":"2022","unstructured":"Xihong Yang, Yue Liu, Sihang Zhou, Xinwang Liu, and En Zhu. 2022. Mixed graph contrastive network for semi-supervised node classification. arXiv:2206.02796. Retrieved from https:\/\/arxiv.org\/abs\/2206.02796","journal-title":"arXiv:2206.02796"},{"key":"e_1_3_1_67_2","article-title":"Contrastive deep graph clustering with learnable augmentation","author":"Yang Xihong","year":"2022","unstructured":"Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Xinwang Liu, and En Zhu. 2022. 
Contrastive deep graph clustering with learnable augmentation. arXiv:2212.03559. Retrieved from https:\/\/arxiv.org\/abs\/2212.03559","journal-title":"arXiv:2212.03559"},{"key":"e_1_3_1_68_2","first-page":"10834","volume-title":"Proceedings of the AAAI conference on artificial intelligence","volume":"37","author":"Yang Xihong","year":"2023","unstructured":"Xihong Yang, Yue Liu, Sihang Zhou, Siwei Wang, Wenxuan Tu, Qun Zheng, Xinwang Liu, Liming Fang, and En Zhu. 2023. Cluster-guided contrastive graph clustering network. In Proceedings of the AAAI conference on artificial intelligence, Vol. 37. 10834\u201310842."},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3611809"},{"key":"e_1_3_1_70_2","first-page":"40","volume-title":"International Conference on Machine Learning","author":"Yang Zhilin","year":"2016","unstructured":"Zhilin Yang, William Cohen, and Ruslan Salakhudinov. 2016. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning. PMLR, 40\u201348."},{"key":"e_1_3_1_71_2","article-title":"Hierarchical graph representation learning with differentiable pooling","volume":"31","author":"Ying Zhitao","year":"2018","unstructured":"Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. 2018. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, Vol. 31.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_72_2","first-page":"5812","article-title":"Graph contrastive learning with augmentations","volume":"33","author":"You Yuning","year":"2020","unstructured":"Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. In Advances in Neural Information Processing Systems, Vol. 
33, 5812\u20135823.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_73_2","first-page":"10871","volume-title":"Proceedings of the International Conference on Machine Learning","author":"You Yuning","year":"2020","unstructured":"Yuning You, Tianlong Chen, Zhangyang Wang, and Yang Shen. 2020. When does self-supervision help graph convolutional networks? In Proceedings of the International Conference on Machine Learning. PMLR, 10871\u201310880."},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00391"},{"key":"e_1_3_1_75_2","article-title":"Mixup: Beyond empirical risk minimization","author":"Zhang Hongyi","year":"2017","unstructured":"Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2017. Mixup: Beyond empirical risk minimization. arXiv:1710.09412. Retrieved from https:\/\/arxiv.org\/abs\/1710.09412","journal-title":"arXiv:1710.09412"},{"key":"e_1_3_1_76_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Zhang Xinyi","year":"2018","unstructured":"Xinyi Zhang and Pietro Lio. 2018. Skip-gram based convolutional networks for graph classification. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_77_2","article-title":"Learning subpocket prototypes for generalizable structure-based drug design","author":"Zhang Zaixi","year":"2023","unstructured":"Zaixi Zhang and Qi Liu. 2023. Learning subpocket prototypes for generalizable structure-based drug design. Proceedings of the International Conference on Machine Learning (ICML\u201923).","journal-title":"Proceedings of the International Conference on Machine Learning (ICML\u201923)"},{"key":"e_1_3_1_78_2","article-title":"Full-atom protein pocket design via iterative refinement","author":"Zhang Zaixi","year":"2023","unstructured":"Zaixi Zhang, Zepu Lu, Zhongkai Hao, Marinka Zitnik, and Qi Liu. 2023. 
Full-atom protein pocket design via iterative refinement. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201923).","journal-title":"Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201923)"},{"key":"e_1_3_1_79_2","first-page":"26911","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Zhao Tong","year":"2022","unstructured":"Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. 2022. Learning from counterfactual links for link prediction. In Proceedings of the International Conference on Machine Learning. PMLR, 26911\u201326926."},{"key":"e_1_3_1_80_2","article-title":"Domain generalization: A survey","author":"Zhou Kaiyang","year":"2022","unstructured":"Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. 2022. Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell. (2022).","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"e_1_3_1_81_2","first-page":"27965","article-title":"Shift-robust gnns: Overcoming the limitations of localized graph training data","volume":"34","author":"Zhu Qi","year":"2021","unstructured":"Qi Zhu, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. 2021. Shift-robust gnns: Overcoming the limitations of localized graph training data. In Advances in Neural Information Processing Systems, Vol. 34, 27965\u201327977.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_82_2","article-title":"Deep graph contrastive representation learning","author":"Zhu Yanqiao","year":"2020","unstructured":"Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. 2020. Deep graph contrastive representation learning. arXiv:2006.04131. 
Retrieved from https:\/\/arxiv.org\/abs\/2006.04131","journal-title":"arXiv:2006.04131"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3449802"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649507","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3649507","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T23:56:53Z","timestamp":1750291013000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649507"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,19]]},"references-count":82,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,8,31]]}},"alternative-id":["10.1145\/3649507"],"URL":"https:\/\/doi.org\/10.1145\/3649507","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,19]]},"assertion":[{"value":"2023-09-15","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-21","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}