{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T19:17:18Z","timestamp":1772824638470,"version":"3.50.1"},"reference-count":88,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2025,2,14]],"date-time":"2025-02-14T00:00:00Z","timestamp":1739491200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["92270114"],"award-info":[{"award-number":["92270114"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Ant Group through CCF-Ant Research Fund and CCF-AFSG Research Fund"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2025,2,28]]},"abstract":"<jats:p>\n            In graph classification, the out-of-distribution (OOD) issue is attracting great attention. To address it, a prevailing idea is to learn stable features, on the assumption that they are substructures that causally determine the label and whose relationship with the label remains stable under distributional uncertainty. In contrast, the complementary parts, termed environmental features, cannot determine the label on their own and hold varying relationships with the label; they are thus regarded as a possible cause of distribution shift. Existing generalization efforts mainly encourage the model\u2019s insensitivity to environmental features, while sensitivity to stable features, though promising for distinguishing the crucial clues from the distributional uncertainty, remains largely unexplored. 
To the best of our knowledge, a paradigm that simultaneously pursues sensitivity to stable features and insensitivity to environmental features for generalizable graph classification has so far been lacking. In this work, we conjecture that generalizable models should be sensitive to stable features and insensitive to environmental features. To this end, we propose a simple yet effective augmentation strategy for graph classification: Equivariant and Invariant Cross-Data Augmentation (EI-CDA). Following the principle of equivariance, given a pair of input graphs, we first estimate their stable and environmental features via masks. Then, we linearly mix the estimated stable features of the two graphs and encourage the model predictions to faithfully reflect their mixed semantics. Meanwhile, following the principle of invariance, we swap the estimated environmental features of the two graphs and keep the predictions invariant. This simple yet effective strategy endows models with both sensitivity to stable features and insensitivity to environmental features. Extensive experiments show that EI-CDA significantly improves performance and outperforms leading baselines. 
Our codes are available at:\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/yongduosui\/EI-GNN\">https:\/\/github.com\/yongduosui\/EI-GNN<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3706062","type":"journal-article","created":{"date-parts":[[2024,11,28]],"date-time":"2024-11-28T18:18:13Z","timestamp":1732817893000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["A Simple Data Augmentation for Graph Classification: A Perspective of Equivariance and Invariance"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4492-147X","authenticated-orcid":false,"given":"Yongduo","family":"Sui","sequence":"first","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-0439-4249","authenticated-orcid":false,"given":"Shuyao","family":"Wang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-2202-3272","authenticated-orcid":false,"given":"Jie","family":"Sun","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1206-9315","authenticated-orcid":false,"given":"Zhiyuan","family":"Liu","sequence":"additional","affiliation":[{"name":"National University of Singapore, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4909-4568","authenticated-orcid":false,"given":"Qing","family":"Cui","sequence":"additional","affiliation":[{"name":"Ant Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9263-7011","authenticated-orcid":false,"given":"Longfei","family":"Li","sequence":"additional","affiliation":[{"name":"Ant Group, Hangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6033-6102","authenticated-orcid":false,"given":"Jun","family":"Zhou","sequence":"additional","affiliation":[{"name":"Ant Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6148-6329","authenticated-orcid":false,"given":"Xiang","family":"Wang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8472-7992","authenticated-orcid":false,"given":"Xiangnan","family":"He","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]}],"member":"320","published-online":{"date-parts":[[2025,2,14]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2012.120"},{"key":"e_1_3_1_3_2","unstructured":"Martin Arjovsky L\u00e9on Bottou Ishaan Gulrajani and David Lopez-Paz. 2019. Invariant risk minimization. arXiv:1907.02893. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:1907.02893"},{"key":"e_1_3_1_4_2","article-title":"Expressive power of invariant and equivariant graph neural networks","author":"Azizian Wa\u00efss","year":"2021","unstructured":"Wa\u00efss Azizian and Marc Lelarge. 2021. Expressive power of invariant and equivariant graph neural networks. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_5_2","article-title":"How attentive are graph attention networks?","author":"Brody Shaked","year":"2022","unstructured":"Shaked Brody, Uri Alon, and Eran Yahav. 2022. How attentive are graph attention networks?. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_6_2","first-page":"1448","article-title":"Invariant rationalization","author":"Chang Shiyu","year":"2020","unstructured":"Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. 
In ICML, 1448\u20131458.","journal-title":"ICML"},{"key":"e_1_3_1_7_2","first-page":"1695","volume-title":"ICML","author":"Chen Tianlong","year":"2021","unstructured":"Tianlong Chen, Yongduo Sui, Xuxi Chen, Aston Zhang, and Zhangyang Wang. 2021. A unified lottery ticket hypothesis for graph neural networks. In ICML. PMLR, 1695\u20131706."},{"key":"e_1_3_1_8_2","article-title":"GANs can play lottery tickets too","author":"Chen Xuxi","year":"2021","unstructured":"Xuxi Chen, Zhenyu Zhang, Yongduo Sui, and Tianlong Chen. 2021. GANs can play lottery tickets too. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_9_2","article-title":"Learning causally invariant representations for out-of-distribution generalization on graphs","author":"Chen Yongqiang","year":"2022","unstructured":"Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, and James Cheng. 2022. Learning causally invariant representations for out-of-distribution generalization on graphs. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_10_2","first-page":"2990","volume-title":"ICML","author":"Cohen Taco","year":"2016","unstructured":"Taco Cohen and Max Welling. 2016. Group equivariant convolutional networks. In ICML. PMLR, 2990\u20132999."},{"key":"e_1_3_1_11_2","first-page":"2189","article-title":"Environment inference for invariant learning","author":"Creager Elliot","year":"2021","unstructured":"Elliot Creager, J\u00f6rn-Henrik Jacobsen, and Richard Zemel. 2021. Environment inference for invariant learning. In ICML. 2189\u20132200.","journal-title":"ICML"},{"key":"e_1_3_1_12_2","article-title":"Equivariant contrastive learning","author":"Dangovski Rumen","year":"2022","unstructured":"Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Solja\u010di\u0107. 2022. Equivariant contrastive learning. 
In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_13_2","unstructured":"Kaize Ding Zhe Xu Hanghang Tong and Huan Liu. 2022. Data augmentation for deep graph learning: A survey. arXiv:2202.08235. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2202.08235"},{"key":"e_1_3_1_14_2","article-title":"A closer Look at distribution shifts and out-of-distribution generalization on graphs","author":"Ding Mucong","year":"2021","unstructured":"Mucong Ding, Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, and Tom Goldstein. 2021. A closer Look at distribution shifts and out-of-distribution generalization on graphs. In NeurIPSW.","journal-title":"NeurIPSW"},{"key":"e_1_3_1_15_2","unstructured":"Vijay Prakash Dwivedi Chaitanya K. Joshi Thomas Laurent Yoshua Bengio and Xavier Bresson. 2020. Benchmarking graph neural networks. arXiv:2003.00982. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2003.00982"},{"key":"e_1_3_1_16_2","article-title":"Debiasing graph neural networks Via learning disentangled causal substructure","author":"Fan Shaohua","year":"2022","unstructured":"Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, and Jian Tang. 2022. Debiasing graph neural networks Via learning disentangled causal substructure. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_17_2","unstructured":"Shaohua Fan Xiao Wang Chuan Shi Peng Cui and Bai Wang. 2021. Generalizing graph neural networks on out-of-distribution graphs. arXiv:2111.10657. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2111.10657"},{"key":"e_1_3_1_18_2","first-page":"721","article-title":"Exgc: Bridging efficiency and explainability in graph condensation","author":"Fang Junfeng","year":"2024","unstructured":"Junfeng Fang, Xinglin Li, Yongduo Sui, Yuan Gao, Guibin Zhang, Kun Wang, Xiang Wang, and Xiangnan He. 2024. Exgc: Bridging efficiency and explainability in graph condensation. 
In WWW, 721\u2013732.","journal-title":"WWW"},{"key":"e_1_3_1_19_2","first-page":"2083","article-title":"Graph U-nets","author":"Gao Hongyang","year":"2019","unstructured":"Hongyang Gao and Shuiwang Ji. 2019. Graph U-nets. In ICML, 2083\u20132092.","journal-title":"ICML"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-020-00257-z"},{"key":"e_1_3_1_21_2","unstructured":"Shurui Gui Xiner Li Limei Wang and Shuiwang Ji. 2022. GOOD: A graph out-of-distribution benchmark. arXiv:2206.08452. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2206.08452"},{"key":"e_1_3_1_22_2","unstructured":"Hongyu Guo and Yongyi Mao. 2021. Intrusion-free graph mixup. arXiv:2110.09344. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2110.09344"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i6.25941"},{"key":"e_1_3_1_24_2","first-page":"8230","article-title":"G-mixup: Graph data augmentation for graph classification","author":"Han Xiaotian","year":"2022","unstructured":"Xiaotian Han, Zhimeng Jiang, Ninghao Liu, and Xia Hu. 2022. G-mixup: Graph data augmentation for graph classification. In ICML, 8230\u20138248.","journal-title":"ICML"},{"key":"e_1_3_1_25_2","article-title":"Open graph benchmark: Datasets for machine learning on graphs","author":"Hu Weihua","year":"2020","unstructured":"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_26_2","article-title":"How to find your friendly neighborhood: Graph attention design with Self-supervision","author":"Kim Dongkwan","year":"2020","unstructured":"Dongkwan Kim and Alice Oh. 2020. How to find your friendly neighborhood: Graph attention design with Self-supervision. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_27_2","unstructured":"Junghurn Kim Sukwon Yun and Chanyoung Park. 2023. 
S-mixup: Structural mixup for graph neural networks. arXiv:2308.08097. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2308.08097"},{"key":"e_1_3_1_28_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2017","unstructured":"Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_29_2","first-page":"60","article-title":"Robust optimization as data augmentation for large-scale graphs","author":"Kong Kezhi","year":"2022","unstructured":"Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, and Tom Goldstein. 2022. Robust optimization as data augmentation for large-scale graphs. In CVPR, 60\u201369.","journal-title":"CVPR"},{"key":"e_1_3_1_30_2","first-page":"5815","article-title":"Out-of-distribution generalization via risk extrapolation (Rex)","author":"Krueger David","year":"2021","unstructured":"David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. 2021. Out-of-distribution generalization via risk extrapolation (Rex). In ICML, 5815\u20135826.","journal-title":"ICML"},{"key":"e_1_3_1_31_2","first-page":"3734","article-title":"Self-attention graph pooling","author":"Lee Junhyun","year":"2019","unstructured":"Junhyun Lee, Inyeop Lee, and Jaewoo Kang. 2019. Self-attention graph pooling. In ICML, 3734\u20133743.","journal-title":"ICML"},{"key":"e_1_3_1_32_2","unstructured":"Haoyang Li Xin Wang Ziwei Zhang and Wenwu Zhu. 2022. OOD-GNN: Out-of-distribution generalized graph neural network. arXiv:2112.03806. Retrieved from https:\/\/arxiv.org\/abs\/2112.03806"},{"key":"e_1_3_1_33_2","unstructured":"Haoyang Li Xin Wang Ziwei Zhang and Wenwu Zhu. 2022. Out-of-distribution generalization on graphs: A survey. arXiv:2202.07987. 
Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2202.07987"},{"key":"e_1_3_1_34_2","article-title":"Learning invariant graph representations for out-of-distribution generalization","author":"Li Haoyang","year":"2022","unstructured":"Haoyang Li, Ziwei Zhang, Xin Wang, and Wenwu Zhu. 2022. Learning invariant graph representations for out-of-distribution generalization. In NeurIPS.","journal-title":"NeurIPS"},{"issue":"1","key":"e_1_3_1_35_2","first-page":"1","article-title":"Invariant node representation learning under distribution shifts with multiple latent environments","volume":"42","author":"Li Haoyang","year":"2023","unstructured":"Haoyang Li, Ziwei Zhang, Xin Wang, and Wenwu Zhu. 2023. Invariant node representation learning under distribution shifts with multiple latent environments. ACM Transactions on Information Systems 42, 1 (2023), 1\u201330.","journal-title":"ACM Transactions on Information Systems"},{"key":"e_1_3_1_36_2","volume-title":"ICLR","author":"Li Sihang","year":"2024","unstructured":"Sihang Li, Zhiyuan Liu, Yanchen Luo, Xiang Wang, Xiangnan He, Kenji Kawaguchi, Tat-Seng Chua, and Qi Tian. 2024. Towards 3D molecule-text interpretation in language models. In ICLR. OpenReview.net."},{"key":"e_1_3_1_37_2","article-title":"Gated graph sequence neural networks","author":"Li Yujia","year":"2016","unstructured":"Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01118"},{"key":"e_1_3_1_39_2","first-page":"6781","volume-title":"ICML","author":"Liu Evan Z.","year":"2021","unstructured":"Evan Z. Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. 2021. Just train twice: Improving group robustness without training group information. In ICML. 
PMLR, 6781\u20136792."},{"key":"e_1_3_1_40_2","first-page":"1069","article-title":"Graph rationalization with environment-based augmentations","author":"Liu Gang","year":"2022","unstructured":"Gang Liu, Tong Zhao, Jiaxin Xu, Tengfei Luo, and Meng Jiang. 2022. Graph rationalization with environment-based augmentations. In SIGKDD, 1069\u20131078.","journal-title":"SIGKDD"},{"key":"e_1_3_1_41_2","article-title":"One for all: Towards training one graph model for all classification tasks","author":"Liu Hao","year":"2024","unstructured":"Hao Liu, Jiarui Feng, Lecheng Kong, Ningyue Liang, Dacheng Tao, Yixin Chen, and Muhan Zhang. 2024. One for all: Towards training one graph model for all classification tasks. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_42_2","first-page":"1548","article-title":"FLOOD: A flexible invariant learning framework for out-of-distribution generalization on graphs","author":"Liu Yang","year":"2023","unstructured":"Yang Liu, Xiang Ao, Fuli Feng, Yunshan Ma, Kuan Li, Tat-Seng Chua, and Qing He. 2023. FLOOD: A flexible invariant learning framework for out-of-distribution generalization on graphs. In SIGKDD, 1548\u20131558.","journal-title":"SIGKDD"},{"key":"e_1_3_1_43_2","first-page":"15623","article-title":"MolCA: Molecular graph-language modeling with Cross-modal projector and uni-modal adapter","author":"Liu Zhiyuan","year":"2023","unstructured":"Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, and Tat-Seng Chua. 2023. MolCA: Molecular graph-language modeling with Cross-modal projector and uni-modal adapter. In EMNLP. 
Association for Computational Linguistics, 15623\u201315638.","journal-title":"EMNLP"},{"key":"e_1_3_1_44_2","first-page":"5353","article-title":"ReactXT: Understanding molecular \u201creaction-Ship\u201d via reaction-contextualized molecule-text pretraining","author":"Liu Zhiyuan","year":"2024","unstructured":"Zhiyuan Liu, Yaorui Shi, An Zhang, Sihang Li, Enzhi Zhang, Xiang Wang, Kenji Kawaguchi, and Tat-Seng Chua. 2024. ReactXT: Understanding molecular \u201creaction-Ship\u201d via reaction-contextualized molecule-text pretraining. In ACL (Findings). Association for Computational Linguistics, 5353\u20135377.","journal-title":"ACL (Findings)"},{"key":"e_1_3_1_45_2","unstructured":"Zhiyuan Liu Yaorui Shi An Zhang Enzhi Zhang Kenji Kawaguchi Xiang Wang and Tat-Seng Chua. 2023. Rethinking tokenizer and decoder in masked graph modeling for molecules. In NeurIPS. Retrieved from https:\/\/openreview.net\/forum?id=fWLf8DV0fI"},{"key":"e_1_3_1_46_2","first-page":"5949","article-title":"ProtT3: Protein-to-text generation for text-based protein understanding","author":"Liu Zhiyuan","year":"2024","unstructured":"Zhiyuan Liu, An Zhang, Hao Fei, Enzhi Zhang, Xiang Wang, Kenji Kawaguchi, and Tat-Seng Chua. 2024c. ProtT3: Protein-to-text generation for text-based protein understanding. In ACL, 5949\u20135966.","journal-title":"ACL"},{"key":"e_1_3_1_47_2","unstructured":"Zhiyuan Liu An Zhang Yu Sun Yicong Li Yaorui Shi Sihang Li Xiang Wang Xiangnan He and Tat-Seng Chua. 2023d. Towards Equivariant Graph Contrastive Learning via Cross-Graph Augmentation. Retrieved from https:\/\/openreview.net\/forum?id=9L1Ts8t66YK"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE60146.2024.00203"},{"key":"e_1_3_1_49_2","article-title":"Invariant and equivariant graph networks","author":"Maron Haggai","year":"2019","unstructured":"Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. 2019. Invariant and equivariant graph networks. 
In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_50_2","first-page":"15524","article-title":"Interpretable and generalizable graph learning Via stochastic attention mechanism","author":"Miao Siqi","year":"2022","unstructured":"Siqi Miao, Miaoyuan Liu, and Pan Li. 2022. Interpretable and generalizable graph learning Via stochastic attention mechanism. In ICML. 15524\u201315543.","journal-title":"ICML"},{"key":"e_1_3_1_51_2","article-title":"Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs","author":"Murphy Ryan L","year":"2018","unstructured":"Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. 2018. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2947204"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1037\/a0036434"},{"key":"e_1_3_1_54_2","volume-title":"Models, Reasoning and Inference","author":"Pearl Judea","year":"2000","unstructured":"Judea Pearl et al. 2000. Models, Reasoning and Inference. Cambridge University Press, Cambridge, UK."},{"key":"e_1_3_1_55_2","article-title":"Frame averaging for invariant and equivariant network design","author":"Puny Omri","year":"2022","unstructured":"Omri Puny, Matan Atzmon, Edward J Smith, Ishan Misra, Aditya Grover, Heli Ben-Hamu, and Yaron Lipman. 2022. Frame averaging for invariant and equivariant network design. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.5555\/3291125.3291161"},{"key":"e_1_3_1_57_2","article-title":"Dropedge: Towards deep graph convolutional networks on node classification","author":"Rong Yu","year":"2020","unstructured":"Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. 2020. Dropedge: Towards deep graph convolutional networks on node classification. 
In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_58_2","article-title":"Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization","author":"Sagawa Shiori","year":"2020","unstructured":"Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. 2020. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_59_2","volume-title":"Group Theory","author":"Scott William Raymond","year":"2012","unstructured":"William Raymond Scott. 2012. Group Theory. Courier Corporation."},{"key":"e_1_3_1_60_2","unstructured":"Zheyan Shen Jiashuo Liu Yue He Xingxuan Zhang Renzhe Xu Han Yu and Peng Cui. 2021. Towards out-of-distribution generalization: A survey. arXiv:2108.13624. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2108.13624"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3644392"},{"key":"e_1_3_1_62_2","first-page":"2552","article-title":"Invariant graph learning for causal effect estimation","author":"Sui Yongduo","year":"2024","unstructured":"Yongduo Sui, Caizhi Tang, Zhixuan Chu, Junfeng Fang, Yuan Gao, Qing Cui, Longfei Li, Jun Zhou, and Xiang Wang. 2024. Invariant graph learning for causal effect estimation. In WWW, 2552\u20132562.","journal-title":"WWW"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-023-2583-5"},{"key":"e_1_3_1_64_2","first-page":"1696","article-title":"Causal attention for interpretable and generalizable graph classification","author":"Sui Yongduo","year":"2022","unstructured":"Yongduo Sui, Xiang Wang, Jiancan Wu, Xiangnan He, and Tat-Seng Chua. 2022. Causal attention for interpretable and generalizable graph classification. In SIGKDD, 1696\u20131705.","journal-title":"SIGKDD"},{"key":"e_1_3_1_65_2","unstructured":"Yongduo Sui Xiang Wang Jiancan Wu An Zhang and Xiangnan He. 2022. 
Adversarial causal augmentation for graph covariate shift. arXiv:2211.02843. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2211.02843"},{"key":"e_1_3_1_66_2","article-title":"Unleashing the power of graph data augmentation on covariate distribution shift","author":"Sui Yongduo","year":"2023","unstructured":"Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, and Xiangnan He. 2023. Unleashing the power of graph data augmentation on covariate distribution shift. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_67_2","first-page":"15920","article-title":"Adversarial graph augmentation to improve graph contrastive learning","author":"Suresh Susheel","year":"2021","unstructured":"Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. 2021. Adversarial graph augmentation to improve graph contrastive learning. In NeurIPS, 15920\u201315933.","journal-title":"NeurIPS"},{"key":"e_1_3_1_68_2","unstructured":"Kiran K Thekumparampil Chong Wang Sewoong Oh and Li-Jia Li. 2018. Attention-based graph neural network for semi-supervised learning. arXiv:1803.03735. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:1803.03735"},{"key":"e_1_3_1_69_2","article-title":"Graph attention networks","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_70_2","first-page":"3745","article-title":"Unleashing the power of knowledge graph for recommendation via invariant learning","author":"Wang Shuyao","year":"2024","unstructured":"Shuyao Wang, Yongduo Sui, Chao Wang, and Hui Xiong. 2024. Unleashing the power of knowledge graph for recommendation via invariant learning. 
In WWW, 3745\u20133755.","journal-title":"WWW"},{"key":"e_1_3_1_71_2","first-page":"740","article-title":"Dynamic sparse learning: A novel paradigm for efficient recommendation","author":"Wang Shuyao","year":"2024","unstructured":"Shuyao Wang, Yongduo Sui, Jiancan Wu, Zhi Zheng, and Hui Xiong. 2024. Dynamic sparse learning: A novel paradigm for efficient recommendation. In WSDM, 740\u2013749.","journal-title":"WSDM"},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22827"},{"key":"e_1_3_1_73_2","first-page":"3663","article-title":"Mixup for node and graph classification","author":"Wang Yiwei","year":"2021","unstructured":"Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, and Bryan Hooi. 2021. Mixup for node and graph classification. In WWW, 3663\u20133674.","journal-title":"WWW"},{"key":"e_1_3_1_74_2","article-title":"A fine-grained analysis on distribution shift","author":"Wiles Olivia","year":"2022","unstructured":"Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre Alvise-Rebuffi, Ira Ktena, Taylan Cemgil, and Taylan Cemgil. 2022. A fine-grained analysis on distribution shift. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_75_2","article-title":"Handling distribution shifts on graphs: An invariance perspective","author":"Wu Qitian","year":"2022","unstructured":"Qitian Wu, Hengrui Zhang, Junchi Yan, and David Wipf. 2022. Handling distribution shifts on graphs: An invariance perspective. In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_76_2","article-title":"Discovering invariant rationales for graph neural networks","author":"Wu Yingxin","year":"2022","unstructured":"Yingxin Wu, Xiang Wang, An Zhang, Xiangnan He, and Tat-Seng Chua. 2022. Discovering invariant rationales for graph neural networks. 
In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1039\/C7SC02664A"},{"key":"e_1_3_1_78_2","article-title":"How powerful are graph neural networks?","author":"Xu Keyulu","year":"2019","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How powerful are graph neural networks? In ICLR.","journal-title":"ICLR"},{"key":"e_1_3_1_79_2","article-title":"Learning substructure invariance for out-of-distribution molecular representations","author":"Yang Nianzu","year":"2022","unstructured":"Nianzu Yang, Kaipeng Zeng, Qitian Wu, Xiaosong Jia, and Junchi Yan. 2022. Learning substructure invariance for out-of-distribution molecular representations. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_80_2","first-page":"7947","article-title":"OoD-Bench: Quantifying and understanding Two dimensions of out-of-distribution generalization","author":"Ye Nanyang","year":"2022","unstructured":"Nanyang Ye, Kaican Li, Haoyue Bai, Runpeng Yu, Lanqing Hong, Fengwei Zhou, Zhenguo Li, and Jun Zhu. 2022. OoD-Bench: Quantifying and understanding Two dimensions of out-of-distribution generalization. In CVPR, 7947\u20137958.","journal-title":"CVPR"},{"key":"e_1_3_1_81_2","first-page":"9240","article-title":"GNNExplainer: Generating explanations for graph neural networks","author":"Ying Zhitao","year":"2019","unstructured":"Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating explanations for graph neural networks. In NeurIPS, 9240\u20139251.","journal-title":"NeurIPS"},{"key":"e_1_3_1_82_2","first-page":"4805","article-title":"Hierarchical graph representation learning with differentiable pooling","author":"Ying Zhitao","year":"2018","unstructured":"Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. 2018. Hierarchical graph representation learning with differentiable pooling. In NeurIPS. 
4805\u20134815.","journal-title":"NeurIPS"},{"key":"e_1_3_1_83_2","first-page":"1281","article-title":"Model-agnostic augmentation for accurate graph classification","author":"Yoo Jaemin","year":"2022","unstructured":"Jaemin Yoo, Sooyeon Shim, and U Kang. 2022. Model-agnostic augmentation for accurate graph classification. In WWW, 1281\u20131291.","journal-title":"WWW"},{"key":"e_1_3_1_84_2","article-title":"Graph contrastive learning with augmentations","author":"You Yuning","year":"2020","unstructured":"Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_1_85_2","volume-title":"ICML","author":"Zhang Guibin","year":"2024","unstructured":"Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, and Tianlong Chen. 2024. Two heads are better than one: Boosting graph sparse training Via semantic and topological awareness. In ICML. PMLR."},{"key":"e_1_3_1_86_2","unstructured":"Hongyi Zhang Moustapha Cisse Yann N Dauphin and David Lopez-Paz. 2017. mixup: Beyond empirical risk minimization. arXiv:1710.09412. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:1710.09412"},{"key":"e_1_3_1_87_2","article-title":"An end-to-end deep learning architecture for graph classification","author":"Zhang Muhan","year":"2018","unstructured":"Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. 2018. An end-to-end deep learning architecture for graph classification. In AAAI.","journal-title":"AAAI"},{"key":"e_1_3_1_88_2","unstructured":"Tong Zhao Gang Liu Stephan G\u00fcnnemann and Meng Jiang. 2022. Graph data augmentation for graph machine learning: A survey. arXiv:2202.08871. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2202.08871"},{"key":"e_1_3_1_89_2","unstructured":"You Zhou Xiujing Lin Xiang Zhang Maolin Wang Gangwei Jiang Huakang Lu Yupeng Wu Kai Zhang Zhe Yang Kehang Wang et al. 2023. 
On the opportunities of Green computing: A survey. arXiv:2311.00447. Retrieved from https:\/\/arxiv.org\/abs\/arXiv:2311.00447"}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3706062","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3706062","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:13Z","timestamp":1750295893000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3706062"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,14]]},"references-count":88,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,2,28]]}},"alternative-id":["10.1145\/3706062"],"URL":"https:\/\/doi.org\/10.1145\/3706062","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,14]]},"assertion":[{"value":"2023-09-14","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-10","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-14","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}