{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T18:16:11Z","timestamp":1775326571869,"version":"3.50.1"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Graph Neural Networks (GNNs) have shown great success in various graph-based learning tasks. However, they often face the issue of over-smoothing as model depth increases, which causes all node representations to converge to a single value and become indistinguishable. This issue stems from the inherent limitations of GNNs, which struggle to distinguish the importance of information from different neighborhoods. In this paper, we introduce MbaGCN, a novel graph convolutional architecture that draws inspiration from the Mamba paradigm\u2014originally designed for sequence modeling. MbaGCN presents a new backbone for GNNs, consisting of three key components: the Message Aggregation Layer, the Selective State Space Transition Layer, and the Node State Prediction Layer. These components work in tandem to adaptively aggregate neighborhood information, providing greater flexibility and scalability for deep GNN models. While MbaGCN may not consistently outperform all existing methods on every dataset, it provides a foundational framework that demonstrates the effective integration of the Mamba paradigm into graph representation learning. Through extensive experiments on benchmark datasets, we demonstrate that MbaGCN paves the way for future advancements in graph neural network research. Our code is available at https:\/\/github.com\/hexin5515\/MbaGCN.<\/jats:p>","DOI":"10.24963\/ijcai.2025\/595","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"5345-5353","source":"Crossref","is-referenced-by-count":2,"title":["Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space"],"prefix":"10.24963","author":[{"given":"Xin","family":"He","sequence":"first","affiliation":[{"name":"Jilin University"}]},{"given":"Yili","family":"Wang","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Wenqi","family":"Fan","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University"}]},{"given":"Xu","family":"Shen","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Xin","family":"Juan","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Rui","family":"Miao","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Xin","family":"Wang","sequence":"additional","affiliation":[{"name":"Jilin University"}]}],"member":"10584","event":{"name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","theme":"Artificial Intelligence","location":"Montreal, Canada","acronym":"IJCAI-2025","number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"start":{"date-parts":[[2025,8,16]]},"end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:34:33Z","timestamp":1758627273000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/595"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/595","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}