{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T17:48:17Z","timestamp":1772905697512,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":72,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,8,14]],"date-time":"2021-08-14T00:00:00Z","timestamp":1628899200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000183","name":"Army Research Office","doi-asserted-by":"publisher","award":["No. W911NF2110182"],"award-info":[{"award-number":["No. W911NF2110182"]}],"id":[{"id":"10.13039\/100000183","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["No. 1937786 and 1937787"],"award-info":[{"award-number":["No. 1937786 and 1937787"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,8,14]]},"DOI":"10.1145\/3447548.3467295","type":"proceedings-article","created":{"date-parts":[[2021,8,13]],"date-time":"2021-08-13T18:21:39Z","timestamp":1628878899000},"page":"1645-1653","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":45,"title":["Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation"],"prefix":"10.1145","author":[{"given":"Binghui","family":"Wang","sequence":"first","affiliation":[{"name":"Duke University, Durham, NC, USA"}]},{"given":"Jinyuan","family":"Jia","sequence":"additional","affiliation":[{"name":"Duke University, Durham, NC, USA"}]},{"given":"Xiaoyu","family":"Cao","sequence":"additional","affiliation":[{"name":"Duke University, Durham, NC, USA"}]},{"given":"Neil 
Zhenqiang","family":"Gong","sequence":"additional","affiliation":[{"name":"Duke University, Durham, NC, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,8,14]]},"reference":[{"key":"e_1_3_2_2_1_1","unstructured":"Aleksandar Bojchevski and Stephan G\u00fcnnemann. 2019 a. Adversarial Attacks on Node Embeddings via Graph Poisoning. In ICML .  Aleksandar Bojchevski and Stephan G\u00fcnnemann. 2019 a. Adversarial Attacks on Node Embeddings via Graph Poisoning. In ICML ."},{"key":"e_1_3_2_2_2_1","unstructured":"Aleksandar Bojchevski and Stephan G\u00fcnnemann. 2019 b. Certifiable Robustness to Graph Perturbations. In NeurIPS .  Aleksandar Bojchevski and Stephan G\u00fcnnemann. 2019 b. Certifiable Robustness to Graph Perturbations. In NeurIPS ."},{"key":"e_1_3_2_2_3_1","unstructured":"Aleksandar Bojchevski Johannes Klicpera and Stephan G\u00fcnnemann. 2020. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs images and more. In ICML .  Aleksandar Bojchevski Johannes Klicpera and Stephan G\u00fcnnemann. 2020. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs images and more. In ICML ."},{"key":"e_1_3_2_2_4_1","unstructured":"Rudy R Bunel Ilker Turkaslan Philip Torr Pushmeet Kohli and Pawan K Mudigonda. 2018. A unified view of piecewise linear neural network verification. In NeurIPS .  Rudy R Bunel Ilker Turkaslan Philip Torr Pushmeet Kohli and Pawan K Mudigonda. 2018. A unified view of piecewise linear neural network verification. In NeurIPS ."},{"key":"e_1_3_2_2_5_1","unstructured":"Xiaoyu Cao and Neil Zhenqiang Gong. 2017. Mitigating evasion attacks to deep neural networks via region-based classification. In ACSAC .  Xiaoyu Cao and Neil Zhenqiang Gong. 2017. Mitigating evasion attacks to deep neural networks via region-based classification. In ACSAC ."},{"key":"e_1_3_2_2_6_1","volume-title":"Provably minimally-distorted adversarial examples. 
arXiv","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini , Guy Katz , Clark Barrett , and David L Dill . 2017. Provably minimally-distorted adversarial examples. arXiv ( 2017 ). Nicholas Carlini, Guy Katz, Clark Barrett, and David L Dill. 2017. Provably minimally-distorted adversarial examples. arXiv (2017)."},{"key":"e_1_3_2_2_7_1","doi-asserted-by":"crossref","unstructured":"Heng Chang Yu Rong Tingyang Xu Wenbing Huang Honglei Zhang Peng Cui Wenwu Zhu and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In AAAI .  Heng Chang Yu Rong Tingyang Xu Wenbing Huang Honglei Zhang Peng Cui Wenwu Zhu and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In AAAI .","DOI":"10.1609\/aaai.v34i04.5741"},{"key":"e_1_3_2_2_8_1","doi-asserted-by":"crossref","unstructured":"Chih-Hong Cheng Georg N\u00fchrenberg and Harald Ruess. 2017. Maximum resilience of artificial neural networks. In ATVA .  Chih-Hong Cheng Georg N\u00fchrenberg and Harald Ruess. 2017. Maximum resilience of artificial neural networks. In ATVA .","DOI":"10.1007\/978-3-319-68167-2_18"},{"key":"e_1_3_2_2_9_1","volume":"201","author":"Cohen Jeremy M","unstructured":"Jeremy M Cohen , Elan Rosenfeld , and J Zico Kolter. 201 9. Certified adversarial robustness via randomized smoothing. In ICML . Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. 2019. Certified adversarial robustness via randomized smoothing. In ICML .","journal-title":"J Zico Kolter."},{"key":"e_1_3_2_2_10_1","unstructured":"Hanjun Dai Hui Li Tian Tian Xin Huang Lin Wang Jun Zhu and Le Song. 2018. Adversarial attack on graph structured data. In ICML .  Hanjun Dai Hui Li Tian Tian Xin Huang Lin Wang Jun Zhu and Le Song. 2018. Adversarial attack on graph structured data. In ICML ."},{"key":"e_1_3_2_2_11_1","unstructured":"Krishnamurthy Dvijotham Sven Gowal Robert Stanforth and etal 2018a. 
Training verified learners with learned verifiers. arXiv (2018)."},{"key":"e_1_3_2_2_12_1","unstructured":"Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, et al. 2018b. A Dual Approach to Scalable Verification of Deep Networks. In UAI."},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"crossref","unstructured":"Ruediger Ehlers. 2017. Formal verification of piece-wise linear feed-forward neural networks. In ATVA.","DOI":"10.1007\/978-3-319-68167-2_19"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"crossref","unstructured":"Negin Entezari, Saba A Al-Sayouri, Amirali Darvishzadeh, and Evangelos E Papalexakis. 2020. All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs. In WSDM.","DOI":"10.1145\/3336191.3371789"},{"key":"e_1_3_2_2_15_1","volume-title":"Deep neural networks and mixed integer linear optimization. Constraints","author":"Fischetti Matteo","year":"2018","unstructured":"Matteo Fischetti and Jason Jo. 2018. Deep neural networks and mixed integer linear optimization. Constraints (2018)."},{"key":"e_1_3_2_2_16_1","volume-title":"Ai2: Safety and robustness certification of neural networks with abstract interpretation","author":"Gehr Timon","unstructured":"Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. 2018. 
Ai2: Safety and robustness certification of neural networks with abstract interpretation. In IEEE S&P."},{"key":"e_1_3_2_2_17_1","unstructured":"Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In ICML."},{"key":"e_1_3_2_2_18_1","volume-title":"Sybilbelief: A semi-supervised learning approach for structure-based sybil detection","author":"Gong Neil Zhenqiang","year":"2014","unstructured":"Neil Zhenqiang Gong, Mario Frank, and Prateek Mittal. 2014. Sybilbelief: A semi-supervised learning approach for structure-based sybil detection. IEEE TIFS (2014)."},{"key":"e_1_3_2_2_19_1","volume-title":"USENIX Security Symposium","author":"Gong Neil Zhenqiang","year":"2016","unstructured":"Neil Zhenqiang Gong and Bin Liu. 2016. You are who you know and how you behave: Attribute inference attacks via users' social friends and behaviors. In USENIX Security Symposium."},{"key":"e_1_3_2_2_20_1","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. 
In NIPS ."},{"key":"e_1_3_2_2_21_1","unstructured":"Jinyuan Jia Xiaoyu Cao Binghui Wang and Neil Zhenqiang Gong. 2020 a. Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. In ICLR .  Jinyuan Jia Xiaoyu Cao Binghui Wang and Neil Zhenqiang Gong. 2020 a. Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. In ICLR ."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380029"},{"key":"e_1_3_2_2_23_1","unstructured":"Jinyuan Jia Binghui Wang Le Zhang and Neil Zhenqiang Gong. 2017. AttriInfer: Inferring user attributes in online social networks using markov random fields. In WWW .  Jinyuan Jia Binghui Wang Le Zhang and Neil Zhenqiang Gong. 2017. AttriInfer: Inferring user attributes in online social networks using markov random fields. In WWW ."},{"key":"e_1_3_2_2_24_1","volume-title":"Venkata Jaya Shankar Ashish Peruri, and Xinhua Zhang","author":"Jin Hongwei","year":"2020","unstructured":"Hongwei Jin , Zhan Shi , Venkata Jaya Shankar Ashish Peruri, and Xinhua Zhang . 2020 . Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks. In NeurIPS . Hongwei Jin, Zhan Shi, Venkata Jaya Shankar Ashish Peruri, and Xinhua Zhang. 2020. Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks. In NeurIPS ."},{"key":"e_1_3_2_2_25_1","volume-title":"Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV .","author":"Katz Guy","year":"2017","unstructured":"Guy Katz , Clark Barrett , David L Dill , and 2017 . Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV . Guy Katz, Clark Barrett, David L Dill, and et al. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV ."},{"key":"e_1_3_2_2_26_1","unstructured":"Thomas N Kipf and Max Welling. 2017. 
Semi-supervised classification with graph convolutional networks. In ICLR."},{"key":"e_1_3_2_2_27_1","unstructured":"Johannes Klicpera, Aleksandar Bojchevski, and Stephan G\u00fcnnemann. 2019. Predict then propagate: Graph neural networks meet pagerank. In ICLR."},{"key":"e_1_3_2_2_28_1","volume-title":"Certified robustness to adversarial examples with differential privacy","author":"Lecuyer Mathias","unstructured":"Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. 2019. Certified robustness to adversarial examples with differential privacy. In IEEE S&P."},{"key":"e_1_3_2_2_29_1","unstructured":"GuangHe Lee, Yang Yuan, Shiyu Chang, and Tommi Jaakkola. 2019. Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers. In NeurIPS."},{"key":"e_1_3_2_2_30_1","volume-title":"Testing statistical hypotheses","author":"Lehmann Erich L","unstructured":"Erich L Lehmann and Joseph P Romano. 2006. Testing statistical hypotheses. Springer Science & Business Media."},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"crossref","unstructured":"Alexander Levine and Soheil Feizi. 2020. 
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation. In AAAI.","DOI":"10.1609\/aaai.v34i04.5888"},{"key":"e_1_3_2_2_32_1","unstructured":"Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. 2019. Certified Adversarial Robustness with Additive Noise. In NeurIPS."},{"key":"e_1_3_2_2_33_1","unstructured":"Xuanqing Liu, Minhao Cheng, Huan Zhang, and Cho-Jui Hsieh. 2018. Towards robust neural networks via random self-ensemble. In ECCV."},{"key":"e_1_3_2_2_34_1","unstructured":"Matthew Mirman, Timon Gehr, and Martin Vechev. 2018. Differentiable abstract interpretation for provably robust neural networks. In ICML."},{"key":"e_1_3_2_2_35_1","doi-asserted-by":"crossref","unstructured":"Alan Mislove, Bimal Viswanath, Krishna P Gummadi, and Peter Druschel. 2010. You are who you know: inferring user profiles in online social networks. In WSDM.","DOI":"10.1145\/1718487.1718519"},{"key":"e_1_3_2_2_36_1","unstructured":"Jerzy Neyman and Egon Sharpe Pearson. 1933. IX. On the problem of the most efficient tests of statistical hypotheses. (1933)."},{"key":"e_1_3_2_2_37_1","doi-asserted-by":"crossref","unstructured":"Shashank Pandit Horng Chau Samuel Wang and Christos Faloutsos. 2007. Netprobe: a fast and scalable system for fraud detection in online auction networks. In WWW .  
","DOI":"10.1145\/1242572.1242600"},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"crossref","unstructured":"Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.","DOI":"10.1016\/B978-0-08-051489-5.50008-4"},{"key":"e_1_3_2_2_39_1","unstructured":"Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018a. Certified defenses against adversarial examples. In ICLR."},{"key":"e_1_3_2_2_40_1","unstructured":"Aditi Raghunathan, Jacob Steinhardt, and Percy S Liang. 2018b. Semidefinite relaxations for certifying robustness to adversarial examples. In NeurIPS."},{"key":"e_1_3_2_2_41_1","unstructured":"Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. 2019. Provably robust deep learning via adversarially trained smoothed classifiers. In NeurIPS."},{"key":"e_1_3_2_2_42_1","unstructured":"Karsten Scheibler, Leonore Winterer, Ralf Wimmer, and Bernd Becker. 2015. Towards Verification of Artificial Neural Networks. 
In MBMV ."},{"key":"e_1_3_2_2_43_1","unstructured":"Prithviraj Sen Galileo Namata Mustafa Bilgic and etal 2008. Collective classification in network data. AI magazine (2008).  Prithviraj Sen Galileo Namata Mustafa Bilgic and et al. 2008. Collective classification in network data. AI magazine (2008)."},{"key":"e_1_3_2_2_44_1","unstructured":"Gagandeep Singh Timon Gehr Matthew Mirman Markus P\u00fcschel and Martin Vechev. 2018. Fast and effective robustness certification. In NeurIPS .  Gagandeep Singh Timon Gehr Matthew Mirman Markus P\u00fcschel and Martin Vechev. 2018. Fast and effective robustness certification. In NeurIPS ."},{"key":"e_1_3_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380149"},{"key":"e_1_3_2_2_46_1","doi-asserted-by":"crossref","unstructured":"Acar Tamersoy Kevin Roundy and Duen Horng Chau. 2014. Guilt by association: large scale malware detection by mining file-relation graphs. In KDD .  Acar Tamersoy Kevin Roundy and Duen Horng Chau. 2014. Guilt by association: large scale malware detection by mining file-relation graphs. In KDD .","DOI":"10.1145\/2623330.2623342"},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"crossref","unstructured":"Xianfeng Tang Yandong Li Yiwei Sun Huaxiu Yao Prasenjit Mitra and Suhang Wang. 2020. Transferring Robustness for Graph Neural Network Against Poisoning Attacks. In WSDM .  Xianfeng Tang Yandong Li Yiwei Sun Huaxiu Yao Prasenjit Mitra and Suhang Wang. 2020. Transferring Robustness for Graph Neural Network Against Poisoning Attacks. In WSDM .","DOI":"10.1145\/3336191.3371851"},{"key":"e_1_3_2_2_48_1","unstructured":"Shuchang Tao Huawei Shen Qi Cao Liang Hou and Xueqi Cheng. 2021. Adversarial Immunization for Certifiable Robustness on Graphs. In WSDM .  Shuchang Tao Huawei Shen Qi Cao Liang Hou and Xueqi Cheng. 2021. Adversarial Immunization for Certifiable Robustness on Graphs. 
In WSDM ."},{"key":"e_1_3_2_2_49_1","unstructured":"Petar Velivc kovi\u0107 Guillem Cucurull Arantxa Casanova Adriana Romero Pietro Lio and Yoshua Bengio. 2018. Graph attention networks. In ICLR .  Petar Velivc kovi\u0107 Guillem Cucurull Arantxa Casanova Adriana Romero Pietro Lio and Yoshua Bengio. 2018. Graph attention networks. In ICLR ."},{"key":"e_1_3_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Binghui Wang and Neil Zhenqiang Gong. 2019. Attacking Graph-based Classification via Manipulating the Graph Structure. In CCS .  Binghui Wang and Neil Zhenqiang Gong. 2019. Attacking Graph-based Classification via Manipulating the Graph Structure. In CCS .","DOI":"10.1145\/3319535.3354206"},{"key":"e_1_3_2_2_51_1","volume-title":"Neil Zhenqiang Gong, and Hao Fu","author":"Wang Binghui","year":"2017","unstructured":"Binghui Wang , Neil Zhenqiang Gong, and Hao Fu . 2017 a. GANG : Detecting fraudulent users in online social networks via guilt-by-association on directed graphs. In ICDM . Binghui Wang, Neil Zhenqiang Gong, and Hao Fu. 2017a. GANG: Detecting fraudulent users in online social networks via guilt-by-association on directed graphs. In ICDM ."},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"crossref","unstructured":"Binghui Wang Jinyuan Jia and Neil Zhenqiang Gong. 2019. Graph-based security and privacy analytics via collective classification with joint weight learning and propagation. In NDSS .  Binghui Wang Jinyuan Jia and Neil Zhenqiang Gong. 2019. Graph-based security and privacy analytics via collective classification with joint weight learning and propagation. In NDSS .","DOI":"10.14722\/ndss.2019.23226"},{"key":"e_1_3_2_2_53_1","volume-title":"Structure-based sybil detection in social networks via local rule-based propagation","author":"Wang Binghui","year":"2018","unstructured":"Binghui Wang , Jinyuan Jia , Le Zhang , and Neil Zhenqiang Gong . 2018. Structure-based sybil detection in social networks via local rule-based propagation . 
IEEE TNSE (2018)."},{"key":"e_1_3_2_2_54_1","doi-asserted-by":"crossref","unstructured":"Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. 2017b. SybilSCAR: Sybil detection in online social networks via local rule based propagation. In INFOCOM.","DOI":"10.1109\/INFOCOM.2017.8057066"},{"key":"e_1_3_2_2_55_1","volume-title":"Anti-Money Laundering in Bitcoin: Experimenting with Graph Convolutional Networks for Financial Forensics. In KDD Workshop","author":"Weber Mark","unstructured":"Mark Weber, Giacomo Domeniconi, Jie Chen, et al. 2019. Anti-Money Laundering in Bitcoin: Experimenting with Graph Convolutional Networks for Financial Forensics. In KDD Workshop."},{"key":"e_1_3_2_2_56_1","unstructured":"Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S Dhillon, and Luca Daniel. 2018. Towards fast computation of certified robustness for relu networks. In ICML."},{"key":"e_1_3_2_2_57_1","author":"Wong Eric","unstructured":"Eric Wong and J Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. 
In ICML .","journal-title":"J Zico Kolter."},{"key":"e_1_3_2_2_58_1","volume":"201","author":"Wong Eric","unstructured":"Eric Wong , Frank Schmidt , Jan Hendrik Metzen , and J Zico Kolter. 201 8. Scaling provable adversarial defenses. In NeurIPS . Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J Zico Kolter. 2018. Scaling provable adversarial defenses. In NeurIPS .","journal-title":"J Zico Kolter."},{"key":"e_1_3_2_2_59_1","unstructured":"Huijun Wu Chen Wang Yuriy Tyshetskiy Andrew Docherty Kai Lu and Liming Zhu. 2019. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. In IJCAI .  Huijun Wu Chen Wang Yuriy Tyshetskiy Andrew Docherty Kai Lu and Liming Zhu. 2019. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. In IJCAI ."},{"key":"e_1_3_2_2_60_1","unstructured":"Kaidi Xu Hongge Chen Sijia Liu Pin-Yu Chen Tsui-Wei Weng Mingyi Hong and Xue Lin. 2019 a. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective. In IJCAI .  Kaidi Xu Hongge Chen Sijia Liu Pin-Yu Chen Tsui-Wei Weng Mingyi Hong and Xue Lin. 2019 a. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective. In IJCAI ."},{"key":"e_1_3_2_2_61_1","unstructured":"Keyulu Xu Weihua Hu Jure Leskovec and Stefanie Jegelka. 2019 b. How powerful are graph neural networks?. In ICLR .  Keyulu Xu Weihua Hu Jure Leskovec and Stefanie Jegelka. 2019 b. How powerful are graph neural networks?. In ICLR ."},{"key":"e_1_3_2_2_62_1","unstructured":"Keyulu Xu Chengtao Li Yonglong Tian Tomohiro Sonobe Ken-ichi Kawarabayashi and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In ICML .  Keyulu Xu Chengtao Li Yonglong Tian Tomohiro Sonobe Ken-ichi Kawarabayashi and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In ICML ."},{"key":"e_1_3_2_2_63_1","doi-asserted-by":"crossref","unstructured":"Pinar Yanardag and SVN Vishwanathan. 2015. 
Deep graph kernels. In KDD.","DOI":"10.1145\/2783258.2783417"},{"key":"e_1_3_2_2_64_1","volume-title":"MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius. In ICLR","author":"Zhai Runtian","year":"2020","unstructured":"Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. 2020. MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius. In ICLR."},{"key":"e_1_3_2_2_65_1","unstructured":"Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. 2018. Efficient neural network robustness certification with general activation functions. In NeurIPS."},{"key":"e_1_3_2_2_66_1","doi-asserted-by":"crossref","unstructured":"Elena Zheleva and Lise Getoor. 2009. To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles. In WWW.","DOI":"10.1145\/1526709.1526781"},{"key":"e_1_3_2_2_67_1","unstructured":"Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. 2019. Robust Graph Convolutional Networks Against Adversarial Attacks. In KDD."},{"key":"e_1_3_2_2_68_1","unstructured":"Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. 2003. 
Semi-supervised learning using Gaussian fields and harmonic functions. In ICML."},{"key":"e_1_3_2_2_69_1","doi-asserted-by":"crossref","unstructured":"Daniel Z\u00fcgner, Amir Akbarnejad, and Stephan G\u00fcnnemann. 2018. Adversarial attacks on neural networks for graph data. In KDD.","DOI":"10.24963\/ijcai.2019\/872"},{"key":"e_1_3_2_2_70_1","doi-asserted-by":"crossref","unstructured":"Daniel Z\u00fcgner and Stephan G\u00fcnnemann. 2019a. Adversarial attacks on graph neural networks via meta learning. In ICLR.","DOI":"10.24963\/ijcai.2019\/872"},{"key":"e_1_3_2_2_71_1","doi-asserted-by":"crossref","unstructured":"Daniel Z\u00fcgner and Stephan G\u00fcnnemann. 2019b. Certifiable Robustness and Robust Training for Graph Convolutional Networks. In KDD.","DOI":"10.1145\/3292500.3330905"},{"key":"e_1_3_2_2_72_1","doi-asserted-by":"crossref","unstructured":"Daniel Z\u00fcgner and Stephan G\u00fcnnemann. 2020. Certifiable Robustness of Graph Convolutional Networks under Structure Perturbations. 
In KDD .","DOI":"10.1145\/3394486.3403217"}],"event":{"name":"KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","location":"Virtual Event Singapore","acronym":"KDD '21","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"]},"container-title":["Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &amp; Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467295","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467295","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467295","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:28Z","timestamp":1750191508000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467295"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,14]]},"references-count":72,"alternative-id":["10.1145\/3447548.3467295","10.1145\/3447548"],"URL":"https:\/\/doi.org\/10.1145\/3447548.3467295","relation":{},"subject":[],"published":{"date-parts":[[2021,8,14]]},"assertion":[{"value":"2021-08-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}