{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,23]],"date-time":"2025-12-23T10:27:45Z","timestamp":1766485665161,"version":"3.37.3"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2023,2,27]],"date-time":"2023-02-27T00:00:00Z","timestamp":1677456000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,2,27]],"date-time":"2023-02-27T00:00:00Z","timestamp":1677456000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U1701266"],"award-info":[{"award-number":["U1701266"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Recently, graph contrastive learning (GCL) has achieved remarkable performance in graph representation learning. However, existing GCL methods usually follow a dual-channel encoder network (i.e., Siamese networks), which adds to the complexity of the network architecture. Additionally, these methods overly depend on varied data augmentation techniques, corrupting graph information. Furthermore, they are heavily reliant on large quantities of negative nodes for each object node, which requires tremendous memory costs. To address these issues, we propose a novel and simple graph representation learning framework, named SimGRL. 
Firstly, our proposed network architecture only contains one encoder based on a graph neural network instead of a dual-channel encoder, which simplifies the network architecture. Then we introduce a distributor to generate triplets to obtain the contrastive views between nodes and their neighbors, avoiding the need for data augmentations. Finally, we design a triplet loss based on adjacency information in graphs that utilizes only one negative node for each object node, reducing memory overhead significantly. Extensive experiments demonstrate that SimGRL achieves competitive performance on node classification and graph classification tasks, especially in terms of running time and memory overhead.<\/jats:p>","DOI":"10.1007\/s40747-023-00997-6","type":"journal-article","created":{"date-parts":[[2023,2,27]],"date-time":"2023-02-27T08:03:35Z","timestamp":1677485015000},"page":"5049-5062","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["SimGRL: a simple self-supervised graph representation learning framework via triplets"],"prefix":"10.1007","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6375-0197","authenticated-orcid":false,"given":"Da","family":"Huang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2059-8818","authenticated-orcid":false,"given":"Fangyuan","family":"Lei","sequence":"additional","affiliation":[]},{"given":"Xi","family":"Zeng","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,2,27]]},"reference":[{"key":"997_CR1","doi-asserted-by":"publisher","first-page":"170","DOI":"10.1007\/978-3-319-93037-4_14","volume-title":"Pacific-Asia Conference on knowledge discovery and data mining","author":"B Adhikari","year":"2018","unstructured":"Adhikari B, Zhang Y, Ramakrishnan N, Prakash BA (2018) Sub2vec: feature learning for subgraphs. 
Pacific-Asia Conference on knowledge discovery and data mining. Springer, Berlin, pp 170\u2013182"},{"key":"997_CR2","unstructured":"Battaglia PW, Hamrick JB, Bapst V, Sanchez-Gonzalez A, Zambaldi V, Malinowski M, Tacchetti A, Raposo D, Santoro A, Faulkner R et\u00a0al. (2018) Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261"},{"key":"997_CR3","unstructured":"Borgwardt KM, Kriegel HP (2005) Shortest-path kernels on graphs. In: Fifth IEEE international conference on data mining, pp. 74\u201381. IEEE"},{"key":"997_CR4","unstructured":"Chen T, Kornblith S, Norouzi M, Hinton G (2020) A simple framework for contrastive learning of visual representations. In: International conference on machine learning, pp. 1597\u20131607"},{"key":"997_CR5","doi-asserted-by":"crossref","unstructured":"Chen X, He K (2021) Exploring simple siamese representation learning. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750\u201315758","DOI":"10.1109\/CVPR46437.2021.01549"},{"key":"997_CR6","unstructured":"Defferrard M, Bresson X, Vandergheynst P (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In: Proceedings of the International Conference on Neural Information Processing Systems, pp. 3844\u20133852"},{"key":"997_CR7","unstructured":"Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT"},{"key":"997_CR8","doi-asserted-by":"crossref","unstructured":"Dong X, Shen J (2018) Triplet loss in siamese network for object tracking. In: Proceedings of the European conference on computer vision, pp. 459\u2013474","DOI":"10.1007\/978-3-030-01261-8_28"},{"key":"997_CR9","doi-asserted-by":"crossref","unstructured":"Feng Y, Wang H, Hu HR, Yu L, Wang W, Wang S (2020) Triplet distillation for deep face recognition. In: 2020 IEEE International Conference on Image Processing, pp. 
808\u2013812. IEEE","DOI":"10.1109\/ICIP40778.2020.9190651"},{"key":"997_CR10","doi-asserted-by":"crossref","unstructured":"Gao T, Yao X, Chen D (2021) Simcse: Simple contrastive learning of sentence embeddings. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event \/ Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 6894\u20136910. Association for Computational Linguistics","DOI":"10.18653\/v1\/2021.emnlp-main.552"},{"key":"997_CR11","doi-asserted-by":"crossref","unstructured":"Grover A, Leskovec J (2016) node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855\u2013864","DOI":"10.1145\/2939672.2939754"},{"key":"997_CR12","unstructured":"Hamilton WL, Ying R, Leskovec J (2017) Inductive representation learning on large graphs. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1025\u20131035"},{"key":"997_CR13","unstructured":"Hjelm RD, Fedorov A, Lavoie-Marchildon S, Grewal K, Bachman P, Trischler A, Bengio Y (2018) Learning deep representations by mutual information estimation and maximization. In: International Conference on Learning Representations"},{"key":"997_CR14","doi-asserted-by":"crossref","unstructured":"Hoffer E, Ailon N (2015) Deep metric learning using triplet network. In: International workshop on similarity-based pattern recognition, pp. 84\u201392. Springer","DOI":"10.1007\/978-3-319-24261-3_7"},{"key":"997_CR15","first-page":"22118","volume":"33","author":"W Hu","year":"2020","unstructured":"Hu W, Fey M, Zitnik M, Dong Y, Ren H, Liu B, Catasta M, Leskovec J (2020) Open graph benchmark: datasets for machine learning on graphs. 
Adv Neural Inf Process Syst 33:22118\u201322133","journal-title":"Adv Neural Inf Process Syst"},{"key":"997_CR16","first-page":"1","volume":"2","author":"Q Huang","year":"2022","unstructured":"Huang Q, Yamada M, Tian Y, Singh D, Chang Y (2022) Graphlime: Local interpretable model explanations for graph neural networks. IEEE Trans Knowl Data Eng 2:1\u20136","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"997_CR17","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TKDE.2021.3104155","volume":"2","author":"D Jin","year":"2021","unstructured":"Jin D, Yu Z, Jiao P, Pan S, He D, Wu J, Yu P, Zhang W (2021) A survey of community detection approaches: From statistical modeling to deep learning. IEEE Trans Knowl Data Eng 2:1","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"997_CR18","unstructured":"Kefato ZT, Girdzijauskas S (2021) Self-supervised graph neural networks without explicit negative sampling. In: The International Workshop on Self-Supervised Learning for the Web (SSL\u201921), at WWW\u201921"},{"key":"997_CR19","unstructured":"Kipf TN, Welling M (2016) Variational graph auto-encoders. arXiv preprint arXiv:1611.07308"},{"key":"997_CR20","unstructured":"Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations"},{"key":"997_CR21","unstructured":"Kondor R, Pan H (2016) The multiscale laplacian graph kernel. In: Advances in neural information processing systems, pp. 2990\u20132998"},{"key":"997_CR22","unstructured":"Kriege N, Mutzel P (2012) Subgraph matching kernels for attributed graphs. In: International Conference on Machine Learning, pp. 291\u2013298"},{"key":"997_CR23","unstructured":"Kriege NM, Giscard PL, Wilson R (2016) On valid optimal assignment kernels and applications to graph classification. In: Advances in Neural Information Processing Systems, pp. 
1623\u20131631"},{"key":"997_CR24","doi-asserted-by":"crossref","unstructured":"Lin H, Fu Y, Lu P, Gong S, Xue X, Jiang YG (2019) Tc-net for isbir: Triplet classification network for instance-level sketch based image retrieval. In: Proceedings of the 27th ACM International Conference on Multimedia, pp. 1676\u20131684","DOI":"10.1145\/3343031.3350900"},{"key":"997_CR25","unstructured":"Narayanan A, Chandramohan M, Venkatesan R, Chen L, Liu Y, Jaiswal S (2017) graph2vec: Learning distributed representations of graphs. arXiv preprint arXiv:1707.05005"},{"key":"997_CR26","doi-asserted-by":"publisher","first-page":"943","DOI":"10.1613\/jair.1.13225","volume":"72","author":"G Nikolentzos","year":"2021","unstructured":"Nikolentzos G, Siglidis G, Vazirgiannis M (2021) Graph kernels: a survey. J Artif Intell Res 72:943\u20131027","journal-title":"J Artif Intell Res"},{"key":"997_CR27","doi-asserted-by":"crossref","unstructured":"Pan S, Hu R, Long G, Jiang J, Yao L, Zhang, C (2018) Adversarially regularized graph autoencoder for graph embedding. In: International Joint Conference on Artificial Intelligence","DOI":"10.24963\/ijcai.2018\/362"},{"key":"997_CR28","doi-asserted-by":"crossref","unstructured":"Perozzi B, Al-Rfou R, Skiena S (2014) Deepwalk: Online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701\u2013710","DOI":"10.1145\/2623330.2623732"},{"key":"997_CR29","doi-asserted-by":"crossref","unstructured":"Rhee S, Seo S, Kim S (2018) Hybrid approach of relation network and localized graph convolutional filtering for breast cancer subtype classification. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pp. 
3527\u20133534","DOI":"10.24963\/ijcai.2018\/490"},{"issue":"3","key":"997_CR30","first-page":"93","volume":"29","author":"P Sen","year":"2008","unstructured":"Sen P, Namata G, Bilgic M, Getoor L, Galligher B, Eliassi-Rad T (2008) Collective classification in network data. AI Mag 29(3):93\u201393","journal-title":"AI Mag"},{"issue":"9","key":"997_CR31","first-page":"2539","volume":"12","author":"N Shervashidze","year":"2011","unstructured":"Shervashidze N, Schweitzer P, Van Leeuwen EJ, Mehlhorn K, Borgwardt KM (2011) Weisfeiler\u2013Lehman graph kernels. J Mach Learn Res 12(9):2539\u20132561","journal-title":"J Mach Learn Res"},{"key":"997_CR32","unstructured":"Shervashidze N, Vishwanathan S, Petri T, Mehlhorn K, Borgwardt K (2009) Efficient graphlet kernels for large graph comparison. In: Artificial intelligence and statistics, pp. 488\u2013495. PMLR"},{"key":"997_CR33","unstructured":"Sun FY, Hoffman J, Verma V, Tang J (2020) InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In: International Conference on Learning Representations"},{"key":"997_CR34","unstructured":"Thakoor S, Tallec C, Azar MG, Munos R, Veli\u010dkovi\u0107 P, Valko M (2021) Bootstrapped representation learning on graphs"},{"key":"997_CR35","doi-asserted-by":"crossref","unstructured":"Thomas G, Flach P, Stefan W (2003) On graph kernels: Hardness results and efficient alternatives. In: Proceedings of the 16th Annual Conference on Computational Learning Theory and 7th Kernel Workshop, pp. 129\u2013143","DOI":"10.1007\/978-3-540-45167-9_11"},{"key":"997_CR36","unstructured":"Veli\u010dkovi\u0107 P, Cucurull G, Casanova A, Romero A, Li\u00f3 P, Bengio Y (2018) Graph attention networks. In: International Conference on Learning Representations"},{"key":"997_CR37","unstructured":"Veli\u010dkovi\u0107 P, Fedus W, Hamilton WL, Li\u00f3 P, Bengio Y, Hjelm RD (2019) Deep graph infomax. 
In: International Conference on Learning Representations, p.\u00a04"},{"key":"997_CR38","unstructured":"Wang C, Liu Z (2021) Learning graph representation by aggregating subgraphs via mutual information maximization. arXiv preprint arXiv:2103.13125"},{"key":"997_CR39","doi-asserted-by":"crossref","unstructured":"Wang D, Zhang Z, Zhou J, Cui P, Fang J, Jia Q, FangY, Qi Y (2021) Temporal-aware graph neural network for credit risk prediction. In: Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 702\u2013710. SIAM","DOI":"10.1137\/1.9781611976700.79"},{"key":"997_CR40","unstructured":"Wu F, Souza A, Zhang T, Fifty C, Yu T, Weinberger K (2019) Simplifying graph convolutional networks. In: International conference on machine learning, pp. 6861\u20136871. PMLR"},{"issue":"1","key":"997_CR41","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1109\/TNNLS.2020.2978386","volume":"32","author":"Z Wu","year":"2020","unstructured":"Wu Z, Pan S, Chen F, Long G, Zhang C, Philip SY (2020) A comprehensive survey on graph neural networks. IEEE Trans Neural Netw Learn Syst 32(1):4\u201324","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"997_CR42","unstructured":"Xu K, Hu W, Leskovec J, Jegelka S (2018) How powerful are graph neural networks? In: International Conference on Learning Representations"},{"key":"997_CR43","doi-asserted-by":"crossref","unstructured":"Yanardag P, Vishwanathan S (2015) Deep graph kernels. In: Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1365\u20131374","DOI":"10.1145\/2783258.2783417"},{"key":"997_CR44","unstructured":"Yang Z, Cohen W, Salakhudinov R (2016) Revisiting semi-supervised learning with graph embeddings. In: International conference on machine learning, pp. 40\u201348. 
PMLR"},{"key":"997_CR45","first-page":"7370","volume":"33","author":"L Yao","year":"2019","unstructured":"Yao L, Mao C, Luo Y (2019) Graph convolutional networks for text classification. Proc AAAI Conf Artif Intell 33:7370\u20137377","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"997_CR46","unstructured":"You J, Ying R, Leskovec J (2019) Position-aware graph neural networks. In: International conference on machine learning, pp. 7134\u20137143. PMLR"},{"key":"997_CR47","first-page":"5812","volume":"33","author":"Y You","year":"2020","unstructured":"You Y, Chen T, Sui Y, Chen T, Wang Z, Shen Y (2020) Graph contrastive learning with augmentations. Adv Neural Inf Process Syst 33:5812\u20135823","journal-title":"Adv Neural Inf Process Syst"},{"key":"997_CR48","first-page":"1","volume":"2","author":"D Zhao","year":"2021","unstructured":"Zhao D, Chen C, Li D (2021) Multi-stage attention and center triplet loss for person re-identification. Appl Intell 2:1\u201313","journal-title":"Appl Intell"},{"key":"997_CR49","unstructured":"Zhu Y, Xu Y, Yu F, Liu Q, Wu S, Wang L (2020) Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131"},{"key":"997_CR50","first-page":"2069","volume":"2021","author":"Y Zhu","year":"2021","unstructured":"Zhu Y, Xu Y, Yu F, Liu Q, Wu S, Wang L (2021) Graph contrastive learning with adaptive augmentation. 
Proc Web Conf 2021:2069\u20132080","journal-title":"Proc Web Conf"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-00997-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-00997-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-00997-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,9,22]],"date-time":"2023-09-22T17:15:29Z","timestamp":1695402929000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-00997-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,27]]},"references-count":50,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,10]]}},"alternative-id":["997"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-00997-6","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2023,2,27]]},"assertion":[{"value":"20 September 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 February 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 February 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}