{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:17:43Z","timestamp":1775229463337,"version":"3.50.1"},"reference-count":87,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,3,26]],"date-time":"2024-03-26T00:00:00Z","timestamp":1711411200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["SIGKDD Explor. Newsl."],"published-print":{"date-parts":[[2024,3,26]]},"abstract":"<jats:p>Learning on Graphs has attracted immense attention due to its wide real-world applications. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding. In recent years, Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows to handle text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generate predictions through GNNs. The latter attempts to directly employ LLMs as standalone predictors. We conduct comprehensive and systematical studies on these two pipelines under various settings. From comprehensive empirical results, we make original observations and find new insights that open new possibilities and suggest promising directions to leverage LLMs for learning on graphs. 
Our codes and datasets are available at: https:\/\/github.com\/CurryTang\/Graph-LLM .<\/jats:p>","DOI":"10.1145\/3655103.3655110","type":"journal-article","created":{"date-parts":[[2024,3,28]],"date-time":"2024-03-28T10:10:58Z","timestamp":1711620658000},"page":"42-61","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":144,"title":["Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs"],"prefix":"10.1145","volume":"25","author":[{"given":"Zhikai","family":"Chen","sequence":"first","affiliation":[{"name":"Michigan State University"}]},{"given":"Haitao","family":"Mao","sequence":"additional","affiliation":[{"name":"Michigan State University"}]},{"given":"Hang","family":"Li","sequence":"additional","affiliation":[{"name":"Michigan State University"}]},{"given":"Wei","family":"Jin","sequence":"additional","affiliation":[{"name":"Emory University"}]},{"given":"Hongzhi","family":"Wen","sequence":"additional","affiliation":[{"name":"Michigan State University"}]},{"given":"Xiaochi","family":"Wei","sequence":"additional","affiliation":[{"name":"Baidu Inc."}]},{"given":"Shuaiqiang","family":"Wang","sequence":"additional","affiliation":[{"name":"Baidu Inc."}]},{"given":"Dawei","family":"Yin","sequence":"additional","affiliation":[{"name":"Baidu Inc."}]},{"given":"Wenqi","family":"Fan","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University"}]},{"given":"Hui","family":"Liu","sequence":"additional","affiliation":[{"name":"Michigan State University"}]},{"given":"Jiliang","family":"Tang","sequence":"additional","affiliation":[{"name":"Michigan State University"}]}],"member":"320","published-online":{"date-parts":[[2024,3,28]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Palm 2 technical report. arXiv preprint arXiv:2305.10403","author":"Anil R.","year":"2023","unstructured":"R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P.
Bailey, Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023."},{"key":"e_1_2_1_2_1","volume-title":"Sparks of artificial general intelligence: Early experiments with gpt-4. ArXiv, abs\/2303.12712","author":"Bubeck S.","year":"2023","unstructured":"S. Bubeck, V. Chandrasekaran, R. Eldan, J. A. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y.-F. Li, S. M. Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y. Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. ArXiv, abs\/2303.12712, 2023."},{"key":"e_1_2_1_3_1","volume-title":"Graphllm: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845","author":"Chai Z.","year":"2023","unstructured":"Z. Chai, T. Zhang, L. Wu, K. Han, X. Hu, X. Huang, and Y. Yang. Graphllm: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845, 2023."},{"key":"e_1_2_1_4_1","volume-title":"Label-free node classification on graphs with large language models (llms). arXiv preprint arXiv:2310.04668","author":"Chen Z.","year":"2023","unstructured":"Z. Chen, H. Mao, H. Wen, H. Han, W. Jin, H. Zhang, H. Liu, and J. Tang. Label-free node classification on graphs with large language models (llms). arXiv preprint arXiv:2310.04668, 2023."},{"key":"e_1_2_1_5_1","volume-title":"Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining","author":"Chiang W.-L.","year":"2019","unstructured":"W.-L. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C.-J. Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019."},{"key":"e_1_2_1_6_1","volume-title":"ICLR 2022","author":"Chien E.","year":"2022","unstructured":"E. Chien, W.-C. Chang, C.-J. Hsieh, H.-F. Yu, J. Zhang, O. Milenkovic, and I. S. Dhillon. 
Node feature extraction by self-supervised multi-scale neighborhood prediction. In ICLR 2022, 2022."},{"key":"e_1_2_1_7_1","volume-title":"The Eleventh International Conference on Learning Representations","author":"Creswell A.","year":"2023","unstructured":"A. Creswell, M. Shanahan, and I. Higgins. Selection-Inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023."},{"key":"e_1_2_1_8_1","doi-asserted-by":"crossref","first-page":"227","DOI":"10.1145\/3447548.3467364","volume-title":"Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21","author":"Dai E.","year":"2021","unstructured":"E. Dai, C. Aggarwal, and S. Wang. Nrgnn: Learning a label noise resistant graph neural network on sparsely and noisily labeled graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, page 227--236, New York, NY, USA, 2021. Association for Computing Machinery."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Devlin J.","year":"2019","unstructured":"J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171--4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics."},{"key":"e_1_2_1_10_1","volume-title":"Simteg: A frustratingly simple approach improves textual graph learning. arXiv preprint arXiv:2308.02565","author":"Duan K.","year":"2023","unstructured":"K. Duan, Q. Liu, T.-S. Chua, S. Yan, W. T. Ooi, Q. Xie, and J. He.
Simteg: A frustratingly simple approach improves textual graph learning. arXiv preprint arXiv:2308.02565, 2023."},{"key":"e_1_2_1_11_1","volume-title":"Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track","author":"Dwivedi V. P.","year":"2022","unstructured":"V. P. Dwivedi, L. Rampášek, M. Galkin, A. Parviz, G. Wolf, A. T. Luu, and D. Beaini. Long range graph benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022."},{"key":"e_1_2_1_12_1","doi-asserted-by":"crossref","first-page":"55","DOI":"10.18653\/v1\/D19-1006","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)","author":"Ethayarajh K.","year":"2019","unstructured":"K. Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55--65, Hong Kong, China, Nov. 2019. Association for Computational Linguistics."},{"key":"e_1_2_1_13_1","volume-title":"Fast graph representation learning with pytorch geometric. ArXiv, abs\/1903.02428","author":"Fey M.","year":"2019","unstructured":"M. Fey and J. E. Lenssen. Fast graph representation learning with pytorch geometric. ArXiv, abs\/1903.02428, 2019."},{"key":"e_1_2_1_14_1","volume-title":"Chat-rec: Towards interactive and explainable llms-augmented recommender system. ArXiv, abs\/2303.14524","author":"Gao Y.","year":"2023","unstructured":"Y. Gao, T. Sheng, Y. Xiang, Y. Xiong, H. Wang, and J. Zhang. Chat-rec: Towards interactive and explainable llms-augmented recommender system.
ArXiv, abs\/2303.14524, 2023."},{"key":"e_1_2_1_15_1","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1145\/276675.276685","volume-title":"Proceedings of the Third ACM Conference on Digital Libraries, DL '98","author":"Giles C. L.","year":"1998","unstructured":"C. L. Giles, K. D. Bollacker, and S. Lawrence. Citeseer: An automatic citation indexing system. In Proceedings of the Third ACM Conference on Digital Libraries, DL '98, pages 89--98, New York, NY, USA, 1998. ACM."},{"key":"e_1_2_1_16_1","volume-title":"Neural message passing for quantum chemistry. ArXiv, abs\/1704.01212","author":"Gilmer J.","year":"2017","unstructured":"J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. ArXiv, abs\/1704.01212, 2017."},{"key":"e_1_2_1_17_1","volume-title":"Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track","author":"Gui S.","year":"2022","unstructured":"S. Gui, X. Li, L. Wang, and S. Ji. GOOD: A graph out-of-distribution benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022."},{"key":"e_1_2_1_18_1","volume-title":"Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066","author":"Guo J.","year":"2023","unstructured":"J. Guo, L. Du, and H. Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023."},{"key":"e_1_2_1_19_1","volume-title":"NIPS","author":"Hamilton W. L.","year":"2017","unstructured":"W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. In NIPS, 2017."},{"key":"e_1_2_1_20_1","volume-title":"Distributional structure. Word, 10(2-3):146--162","author":"Harris Z. S.","year":"1954","unstructured":"Z. S. Harris. Distributional structure.
Word, 10(2-3):146--162, 1954."},{"key":"e_1_2_1_21_1","volume-title":"Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654","author":"He P.","year":"2020","unstructured":"P. He, X. Liu, J. Gao, and W. Chen. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654, 2020."},{"key":"e_1_2_1_22_1","volume-title":"Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523","author":"He X.","year":"2023","unstructured":"X. He, X. Bresson, T. Laurent, and B. Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023."},{"key":"e_1_2_1_23_1","volume-title":"Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118--22133","author":"Hu W.","year":"2020","unstructured":"W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118--22133, 2020."},{"key":"e_1_2_1_24_1","volume-title":"Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining","author":"Hu Z.","year":"2020","unstructured":"Z. Hu, Y. Dong, K. Wang, K.-W. Chang, and Y. Sun. Gpt-gnn: Generative pre-training of graph neural networks. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020."},{"key":"e_1_2_1_25_1","volume-title":"Can llms effectively leverage graph structural information: When and why. arXiv preprint arXiv:2309.16595","author":"Huang J.","year":"2023","unstructured":"J. Huang, X. Zhang, Q. Mei, and J. Ma. Can llms effectively leverage graph structural information: When and why.
arXiv preprint arXiv:2309.16595, 2023."},{"key":"e_1_2_1_26_1","volume-title":"Exploring chatgpt's ability to rank content: A preliminary study on consistency with human preferences. ArXiv, abs\/2303.07610","author":"Ji Y.","year":"2023","unstructured":"Y. Ji, Y. Gong, Y. Peng, C. Ni, P. Sun, D. Pan, B. Ma, and X. Li. Exploring chatgpt's ability to rank content: A preliminary study on consistency with human preferences. ArXiv, abs\/2303.07610, 2023."},{"key":"e_1_2_1_27_1","volume-title":"International Conference on Learning Representations","author":"Kipf T. N.","year":"2017","unstructured":"T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017."},{"key":"e_1_2_1_28_1","first-page":"6437","volume-title":"International conference on machine learning","author":"Li G.","year":"2021","unstructured":"G. Li, M. Müller, B. Ghanem, and V. Koltun. Training graph neural networks with 1000 layers. In International conference on machine learning, pages 6437--6449. PMLR, 2021."},{"key":"e_1_2_1_29_1","volume-title":"Out-of-distribution generalization on graphs: A survey. arXiv preprint arXiv:2202.07987","author":"Li H.","year":"2022","unstructured":"H. Li, X. Wang, Z. Zhang, and W. Zhu. Out-of-distribution generalization on graphs: A survey. arXiv preprint arXiv:2202.07987, 2022."},{"key":"e_1_2_1_30_1","volume-title":"Empowering molecule discovery for molecule-caption translation with large language models: A chatgpt perspective. ArXiv, abs\/2306.06615","author":"Li J.","year":"2023","unstructured":"J. Li, Y. Liu, W. Fan, X. Wei, H. Liu, J. Tang, and Q. Li. Empowering molecule discovery for molecule-caption translation with large language models: A chatgpt perspective.
ArXiv, abs\/2306.06615, 2023."},{"key":"e_1_2_1_31_1","first-page":"2024","volume-title":"Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22","author":"Li Q.","year":"2022","unstructured":"Q. Li, X. Li, L. Chen, and D. Wu. Distilling knowledge on text graph for social media attribute inference. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 2024--2028, New York, NY, USA, 2022. Association for Computing Machinery."},{"issue":"1","key":"e_1_2_1_32_1","doi-asserted-by":"crossref","first-page":"228","DOI":"10.1007\/s10618-022-00879-4","article-title":"Informative pseudo-labeling for graph neural networks with few labels","volume":"37","author":"Li Y.","year":"2023","unstructured":"Y. Li, J. Yin, and L. Chen. Informative pseudo-labeling for graph neural networks with few labels. Data Mining and Knowledge Discovery, 37(1):228--254, 2023.","journal-title":"Data Mining and Knowledge Discovery"},{"key":"e_1_2_1_33_1","volume-title":"One for all: Towards training one graph model for all classification tasks. arXiv preprint arXiv:2310.00149","author":"Liu H.","year":"2023","unstructured":"H. Liu, J. Feng, L. Kong, N. Liang, D. Tao, Y. Chen, and M. Zhang. One for all: Towards training one graph model for all classification tasks. arXiv preprint arXiv:2310.00149, 2023."},{"key":"e_1_2_1_34_1","doi-asserted-by":"crossref","first-page":"1248","DOI":"10.1145\/3485447.3512172","volume-title":"Proceedings of the ACM Web Conference 2022, WWW '22","author":"Liu H.","year":"2022","unstructured":"H. Liu, B. Hu, X. Wang, C. Shi, Z. Zhang, and J. Zhou. Confidence may cheat: Self-training on graph neural networks under distribution shift. In Proceedings of the ACM Web Conference 2022, WWW '22, page 1248--1258, New York, NY, USA, 2022. 
Association for Computing Machinery."},{"key":"e_1_2_1_35_1","volume-title":"Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149","author":"Liu J.","year":"2023","unstructured":"J. Liu, C. Liu, R. Lv, K. Zhou, and Y. Zhang. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149, 2023."},{"key":"e_1_2_1_36_1","volume-title":"AAAI Conference on Artificial Intelligence","author":"Liu W.","year":"2019","unstructured":"W. Liu, P. Zhou, Z. Zhao, Z. Wang, Q. Ju, H. Deng, and P. Wang. K-bert: Enabling language representation with knowledge graph. In AAAI Conference on Artificial Intelligence, 2019."},{"key":"e_1_2_1_37_1","doi-asserted-by":"crossref","first-page":"7342","DOI":"10.18653\/v1\/2020.acl-main.655","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics","author":"Liu Z.","year":"2020","unstructured":"Z. Liu, C. Xiong, M. Sun, and Z. Liu. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342--7351, Online, July 2020. Association for Computational Linguistics."},{"key":"e_1_2_1_38_1","doi-asserted-by":"crossref","DOI":"10.1017\/9781108924184","volume-title":"Deep Learning on Graphs","author":"Ma Y.","year":"2021","unstructured":"Y. Ma and J. Tang. Deep Learning on Graphs. Cambridge University Press, 2021."},{"key":"e_1_2_1_39_1","volume-title":"Demystifying structural disparity in graph neural networks: Can one size fit all? arXiv preprint arXiv:2306.01323","author":"Mao H.","year":"2023","unstructured":"H. Mao, Z. Chen, W. Jin, H. Han, Y. Ma, T. Zhao, N. Shah, and J. Tang. Demystifying structural disparity in graph neural networks: Can one size fit all?
arXiv preprint arXiv:2306.01323, 2023."},{"key":"e_1_2_1_40_1","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1023\/A:1009953814988","article-title":"Automating the construction of internet portals with machine learning","volume":"3","author":"McCallum A.","year":"2000","unstructured":"A. McCallum, K. Nigam, J. D. M. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3:127--163, 2000.","journal-title":"Information Retrieval"},{"key":"e_1_2_1_41_1","doi-asserted-by":"crossref","first-page":"110","DOI":"10.18653\/v1\/2020.repl4nlp-1.15","volume-title":"Proceedings of the 5th Workshop on Representation Learning for NLP","author":"Miaschi A.","year":"2020","unstructured":"A. Miaschi and F. Dell'Orletta. Contextual and non-contextual word embeddings: an in-depth linguistic investigation. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 110--119, Online, July 2020. Association for Computational Linguistics."},{"key":"e_1_2_1_42_1","volume-title":"Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781","author":"Mikolov T.","year":"2013","unstructured":"T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013."},{"key":"e_1_2_1_43_1","first-page":"2014","volume-title":"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics","author":"Muennighoff N.","year":"2023","unstructured":"N. Muennighoff, N. Tazi, L. Magne, and N. Reimers. MTEB: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014--2037, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics."},{"key":"e_1_2_1_44_1","volume-title":"Text and code embeddings by contrastive pre-training.
ArXiv, abs\/2201.10005","author":"Neelakantan A.","year":"2022","unstructured":"A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. A. Tezak, J. W. Kim, C. Hallacy, J. Heidecke, P. Shyam, B. Power, T. E. Nekoul, G. Sastry, G. Krueger, D. P. Schnurr, F. P. Such, K. S.-K. Hsu, M. Thompson, T. Khan, T. Sherbakov, J. Jang, P. Welinder, and L. Weng. Text and code embeddings by contrastive pre-training. ArXiv, abs\/2201.10005, 2022."},{"key":"e_1_2_1_45_1","volume-title":"Introducing chatgpt","author":"OpenAI","year":"2022","unstructured":"OpenAI. Introducing chatgpt, 2022."},{"key":"e_1_2_1_46_1","volume-title":"Gpt-4 technical report. ArXiv, abs\/2303.08774","author":"OpenAI","year":"2023","unstructured":"OpenAI. Gpt-4 technical report. ArXiv, abs\/2303.08774, 2023."},{"key":"e_1_2_1_47_1","volume-title":"Frozen transformers in language models are effective visual encoder layers. arXiv preprint arXiv:2310.12973","author":"Pang Z.","year":"2023","unstructured":"Z. Pang, Z. Xie, Y. Man, and Y.-X. Wang. Frozen transformers in language models are effective visual encoder layers. arXiv preprint arXiv:2310.12973, 2023."},{"key":"e_1_2_1_48_1","volume-title":"Language models as knowledge bases? ArXiv, abs\/1909.01066","author":"Petroni F.","year":"2019","unstructured":"F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel. Language models as knowledge bases? ArXiv, abs\/1909.01066, 2019."},{"key":"e_1_2_1_49_1","volume-title":"Revisiting embeddings for graph neural networks. ArXiv, abs\/2209.09338","author":"Purchase S.","year":"2022","unstructured":"S. Purchase, A. Zhao, and R. D. Mullins. Revisiting embeddings for graph neural networks. ArXiv, abs\/2209.09338, 2022."},{"key":"e_1_2_1_50_1","volume-title":"Disentangled representation learning with large language models for text-attributed graphs. arXiv preprint arXiv:2310.18152","author":"Qin Y.","year":"2023","unstructured":"Y. Qin, X. Wang, Z. Zhang, and W. Zhu.
Disentangled representation learning with large language models for text-attributed graphs. arXiv preprint arXiv:2310.18152, 2023."},{"issue":"1872","key":"e_1_2_1_51_1","first-page":"1897","article-title":"Pre-trained models for natural language processing: A survey","volume":"63","author":"Qiu X.","year":"2020","unstructured":"X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63:1872--1897, 2020.","journal-title":"Science China Technological Sciences"},{"key":"e_1_2_1_52_1","volume-title":"Language models are unsupervised multitask learners","author":"Radford A.","year":"2019","unstructured":"A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019."},{"issue":"140","key":"e_1_2_1_53_1","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel C.","year":"2020","unstructured":"C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1--67, 2020.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_1_54_1","first-page":"3982","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)","author":"Reimers N.","year":"2019","unstructured":"N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982--3992, Hong Kong, China, Nov. 2019.
Association for Computational Linguistics."},{"key":"e_1_2_1_55_1","volume-title":"Unravelling graph-exchange file formats. ArXiv, abs\/1503.02781","author":"Roughan M.","year":"2015","unstructured":"M. Roughan and S. J. Tuke. Unravelling graph-exchange file formats. ArXiv, abs\/1503.02781, 2015."},{"key":"e_1_2_1_56_1","volume-title":"Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761","author":"Schick T.","year":"2023","unstructured":"T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023."},{"issue":"3","key":"e_1_2_1_57_1","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1609\/aimag.v29i3.2157","article-title":"Collective classification in network data","volume":"29","author":"Sen P.","year":"2008","unstructured":"P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, Sep. 2008.","journal-title":"AI Magazine"},{"key":"e_1_2_1_58_1","volume-title":"Scalable and adaptive graph neural networks with self-label-enhanced training. arXiv preprint arXiv:2104.09376","author":"Sun C.","year":"2021","unstructured":"C. Sun, H. Gu, and J. Hu. Scalable and adaptive graph neural networks with self-label-enhanced training. arXiv preprint arXiv:2104.09376, 2021."},{"key":"e_1_2_1_59_1","first-page":"20841","volume-title":"International Conference on Machine Learning","author":"Sun T.","year":"2022","unstructured":"T. Sun, Y. Shao, H. Qian, X. Huang, and X. Qiu. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, pages 20841--20855. PMLR, 2022."},{"key":"e_1_2_1_60_1","volume-title":"Text classification via large language models. ArXiv, abs\/2305.08377","author":"Sun X.","year":"2023","unstructured":"X. Sun, X. Li, J. Li, F. Wu, S. Guo, T.
Zhang, and G. Wang. Text classification via large language models. ArXiv, abs\/2305.08377, 2023."},{"key":"e_1_2_1_61_1","volume-title":"Ernie: Enhanced representation through knowledge integration. ArXiv, abs\/1904.09223","author":"Sun Y.","year":"2019","unstructured":"Y. Sun, S. Wang, Y. Li, S. Feng, X. Chen, H. Zhang, X. Tian, D. Zhu, H. Tian, and H. Wu. Ernie: Enhanced representation through knowledge integration. ArXiv, abs\/1904.09223, 2019."},{"key":"e_1_2_1_62_1","volume-title":"Graphgpt: Graph instruction tuning for large language models. arXiv preprint arXiv:2310.13023","author":"Tang J.","year":"2023","unstructured":"J. Tang, Y. Yang, W. Wei, L. Shi, L. Su, S. Cheng, D. Yin, and C. Huang. Graphgpt: Graph instruction tuning for large language models. arXiv preprint arXiv:2310.13023, 2023."},{"key":"e_1_2_1_63_1","volume-title":"Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971","author":"Touvron H.","year":"2023","unstructured":"H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023."},{"key":"e_1_2_1_64_1","volume-title":"International Conference on Learning Representations","author":"Veličković P.","year":"2018","unstructured":"P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In International Conference on Learning Representations, 2018."},{"key":"e_1_2_1_65_1","volume-title":"Can language models solve graph problems in natural language? arXiv preprint arXiv:2305.10037","author":"Wang H.","year":"2023","unstructured":"H. Wang, S. Feng, T. He, Z. Tan, X. Han, and Y. Tsvetkov. Can language models solve graph problems in natural language? arXiv preprint arXiv:2305.10037, 2023."},{"key":"e_1_2_1_66_1","volume-title":"Graph neural architecture search with gpt-4.
arXiv preprint arXiv:2310.01436","author":"Wang H.","year":"2023","unstructured":"H. Wang, Y. Gao, X. Zheng, P. Zhang, H. Chen, and J. Bu. Graph neural architecture search with gpt-4. arXiv preprint arXiv:2310.01436, 2023."},{"key":"e_1_2_1_67_1","volume-title":"On the robustness of chatgpt: An adversarial and out-of-distribution perspective. arXiv preprint arXiv:2302.12095","author":"Wang J.","year":"2023","unstructured":"J. Wang, X. Hu, W. Hou, H. Chen, R. Zheng, Y. Wang, L. Yang, H. Huang, W. Ye, X. Geng, et al. On the robustness of chatgpt: An adversarial and out-of-distribution perspective. arXiv preprint arXiv:2302.12095, 2023."},{"key":"e_1_2_1_68_1","volume-title":"Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533","author":"Wang L.","year":"2022","unstructured":"L. Wang, N. Yang, X. Huang, B. Jiao, L. Yang, D. Jiang, R. Majumder, and F. Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533, 2022."},{"key":"e_1_2_1_69_1","volume-title":"Deep graph library: Towards efficient and scalable deep learning on graphs. ArXiv, abs\/1909.01315","author":"Wang M.","year":"2019","unstructured":"M. Wang, L. Yu, D. Zheng, Q. Gan, Y. Gai, Z. Ye, M. Li, J. Zhou, Q. Huang, C. Ma, Z. Huang, Q. Guo, H. Zhang, H. Lin, J. J. Zhao, J. Li, A. Smola, and Z. Zhang. Deep graph library: Towards efficient and scalable deep learning on graphs. ArXiv, abs\/1909.01315, 2019."},{"key":"e_1_2_1_70_1","volume-title":"Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903","author":"Wei J.","year":"2022","unstructured":"J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022."},{"key":"e_1_2_1_71_1","volume-title":"Huggingface's transformers: State-of-the-art natural language processing. 
ArXiv, abs\/1910.03771","author":"Wolf T.","year":"2019","unstructured":"T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs\/1910.03771, 2019."},{"key":"e_1_2_1_72_1","volume-title":"Active learning for graph neural networks via node feature propagation. ArXiv, abs\/1910.07567","author":"Wu Y.","year":"2019","unstructured":"Y. Wu, Y. Xu, A. Singh, Y. Yang, and A. W. Dubrawski. Active learning for graph neural networks via node feature propagation. ArXiv, abs\/1910.07567, 2019."},{"key":"e_1_2_1_73_1","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1109\/TAI.2021.3076021","article-title":"Graph learning: A survey","volume":"2","author":"Xia F.","year":"2021","unstructured":"F. Xia, K. Sun, S. Yu, A. Aziz, L. Wan, S. Pan, and H. Liu. Graph learning: A survey. IEEE Transactions on Artificial Intelligence, 2:109--127, 2021.","journal-title":"IEEE Transactions on Artificial Intelligence"},{"key":"e_1_2_1_74_1","volume-title":"Advances in Neural Information Processing Systems","author":"Yang J.","year":"2021","unstructured":"J. Yang, Z. Liu, S. Xiao, C. Li, D. Lian, S. Agrawal, A. Singh, G. Sun, and X. Xie. Graphformers: GNN-nested transformers for representation learning on textual graph. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, 2021."},{"key":"e_1_2_1_75_1","volume-title":"Revisiting semi-supervised learning with graph embeddings. ArXiv, abs\/1603.08861","author":"Yang Z.","year":"2016","unstructured":"Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. ArXiv, abs\/1603.08861, 2016."},{"key":"e_1_2_1_76_1","volume-title":"Graph convolutional networks for text classification. ArXiv, abs\/1809.05679","author":"Yao L.","year":"2018","unstructured":"L. Yao, C. Mao, and Y. Luo.
Graph convolutional networks for text classification. ArXiv, abs\/1809.05679, 2018."},{"key":"e_1_2_1_77_1","volume-title":"Neural Information Processing Systems (NeurIPS)","author":"Yasunaga M.","year":"2022","unstructured":"M. Yasunaga, A. Bosselut, H. Ren, X. Zhang, C. D. Manning, P. Liang, and J. Leskovec. Deep bidirectional language-knowledge graph pretraining. In Neural Information Processing Systems (NeurIPS), 2022."},{"key":"e_1_2_1_78_1","doi-asserted-by":"crossref","first-page":"8003","DOI":"10.18653\/v1\/2022.acl-long.551","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Yasunaga M.","year":"2022","unstructured":"M. Yasunaga, J. Leskovec, and P. Liang. Linkbert: Pretraining language models with document links. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8003--8016, 2022."},{"key":"e_1_2_1_79_1","volume-title":"Natural language is all a graph needs. arXiv preprint arXiv:2308.07134","author":"Ye R.","year":"2023","unstructured":"R. Ye, C. Zhang, R. Wang, S. Xu, and Y. Zhang. Natural language is all a graph needs. arXiv preprint arXiv:2308.07134, 2023."},{"key":"e_1_2_1_80_1","volume-title":"Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt. arXiv preprint arXiv:2304.11116","author":"Zhang J.","year":"2023","unstructured":"J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt. arXiv preprint arXiv:2304.11116, 2023."},{"key":"e_1_2_1_81_1","volume-title":"Llm4dyg: Can large language models solve problems on dynamic graphs? arXiv preprint arXiv:2310.17110","author":"Zhang Z.","year":"2023","unstructured":"Z. Zhang, X. Wang, Z. Zhang, H. Li, Y. Qin, S. Wu, and W. Zhu. Llm4dyg: Can large language models solve problems on dynamic graphs? 
arXiv preprint arXiv:2310.17110, 2023."},{"key":"e_1_2_1_82_1","volume-title":"Automatic chain of thought prompting in large language models. ArXiv, abs\/2210.03493","author":"Zhang Z.","year":"2022","unstructured":"Z. Zhang, A. Zhang, M. Li, and A. J. Smola. Automatic chain of thought prompting in large language models. ArXiv, abs\/2210.03493, 2022."},{"key":"e_1_2_1_83_1","volume-title":"The Eleventh International Conference on Learning Representations","author":"Zhao J.","year":"2023","unstructured":"J. Zhao, M. Qu, C. Li, H. Yan, Q. Liu, R. Li, X. Xie, and J. Tang. Learning on large-scale text-attributed graphs via variational inference. In The Eleventh International Conference on Learning Representations, 2023."},{"key":"e_1_2_1_84_1","volume-title":"Graphtext: Graph reasoning in text space. arXiv preprint arXiv:2310.01089","author":"Zhao J.","year":"2023","unstructured":"J. Zhao, L. Zhuo, Y. Shen, M. Qu, K. Liu, M. Bronstein, Z. Zhu, and J. Tang. Graphtext: Graph reasoning in text space. arXiv preprint arXiv:2310.01089, 2023."},{"key":"e_1_2_1_85_1","volume-title":"A survey of large language models. ArXiv, abs\/2303.18223","author":"Zhao W. X.","year":"2023","unstructured":"W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J. Nie, and J.-R. Wen. A survey of large language models. ArXiv, abs\/2303.18223, 2023."},{"key":"e_1_2_1_86_1","volume-title":"Proceedings of the Web Conference 2021","author":"Zhu J.","year":"2021","unstructured":"J. Zhu, Y. Cui, Y. Liu, H. Sun, X. Li, M. Pelger, L. Zhang, T. Yan, R. Zhang, and H. Zhao. Textgnn: Improving text encoder via graph neural network in sponsored search. Proceedings of the Web Conference 2021, 2021."},{"key":"e_1_2_1_87_1","first-page":"7793","volume-title":"Advances in Neural Information Processing Systems","volume":"33","author":"Zhu J.","year":"2020","unstructured":"J. 
Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 7793--7804. Curran Associates, Inc., 2020."}],"container-title":["ACM SIGKDD Explorations Newsletter"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3655103.3655110","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3655103.3655110","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:03:52Z","timestamp":1750291432000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3655103.3655110"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,26]]},"references-count":87,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,3,26]]}},"alternative-id":["10.1145\/3655103.3655110"],"URL":"https:\/\/doi.org\/10.1145\/3655103.3655110","relation":{},"ISSN":["1931-0145","1931-0153"],"issn-type":[{"value":"1931-0145","type":"print"},{"value":"1931-0153","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,26]]},"assertion":[{"value":"2024-03-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}