{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:21:01Z","timestamp":1750220461199,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":57,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,5,24]],"date-time":"2021-05-24T00:00:00Z","timestamp":1621814400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1937786"],"award-info":[{"award-number":["1937786"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,5,24]]},"DOI":"10.1145\/3433210.3437519","type":"proceedings-article","created":{"date-parts":[[2021,6,4]],"date-time":"2021-06-04T15:26:39Z","timestamp":1622820399000},"page":"2-13","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes"],"prefix":"10.1145","author":[{"given":"Jinyuan","family":"Jia","sequence":"first","affiliation":[{"name":"Duke University, Durham, NC, USA"}]},{"given":"Binghui","family":"Wang","sequence":"additional","affiliation":[{"name":"Duke University, Durham, NC, USA"}]},{"given":"Neil Zhenqiang","family":"Gong","sequence":"additional","affiliation":[{"name":"Duke University, Durham, NC, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,6,4]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"OSDI","author":"Abadi Martin","year":"2016","unstructured":"Martin Abadi , Paul Barham , Jianmin Chen , Zhifeng Chen , Andy Davis , Jeffrey Dean , Matthieu Devin , Sanjay Ghemawat , Geoffrey Irving , Michael Isard , : a system for large-scale machine learning . In OSDI , 2016 . Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: a system for large-scale machine learning. In OSDI, 2016."},{"key":"e_1_3_2_1_2_1","volume-title":"USENIX Security Symposium","author":"Adi Yossi","year":"2018","unstructured":"Yossi Adi , Carsten Baum , Moustapha Cisse , Benny Pinkas , and Joseph Keshet . Turning your weakness into a strength: Watermarking deep neural networks by backdooring . In USENIX Security Symposium , 2018 . Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In USENIX Security Symposium, 2018."},{"key":"e_1_3_2_1_3_1","volume-title":"Uci machine learning repository","author":"Asuncion Arthur","year":"2007","unstructured":"Arthur Asuncion and David Newman . Uci machine learning repository , 2007 . Arthur Asuncion and David Newman. Uci machine learning repository, 2007."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1504\/IJSN.2015.071829"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICC.1993.397441"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICB.2013.6613006"},{"key":"e_1_3_2_1_7_1","volume-title":"ICML","author":"Biggio Battista","year":"2012","unstructured":"Battista Biggio , Blaine Nelson , and Pavel Laskov . 
Poisoning attacks against support vector machines. In ICML, 2012."},{"key":"e_1_3_2_1_8_1","volume-title":"Targeted backdoor attacks on deep learning systems using data poisoning. In arXiv","author":"Chen Xinyun","year":"2017","unstructured":"Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. In arXiv, 2017."},{"key":"e_1_3_2_1_9_1","volume-title":"BigLearn","author":"Collobert Ronan","year":"2011","unstructured":"Ronan Collobert, Koray Kavukcuoglu, and Cl\u00e9ment Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, 2011."},{"key":"e_1_3_2_1_10_1","volume-title":"IRE Conv. Rec., 3: 37--46","author":"Elias Peter","year":"1955","unstructured":"Peter Elias. Coding for noisy channels. IRE Conv. Rec., 3:37--46, 1955."},{"key":"e_1_3_2_1_11_1","volume-title":"USENIX Security Symposium","author":"Fang Minghong","year":"2020","unstructured":"Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. Local model poisoning attacks to byzantine-robust federated learning. In USENIX Security Symposium, 2020."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380072"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3274694.3274706"},{"key":"e_1_3_2_1_14_1","volume-title":"CCS","author":"Fredrikson Matt","year":"2015","unstructured":"Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In CCS, 2015."},{"key":"e_1_3_2_1_15_1","volume-title":"USENIX Security Symposium","author":"Fredrikson Matthew","year":"2014","unstructured":"Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In USENIX Security Symposium, 2014."},{"key":"e_1_3_2_1_16_1","volume-title":"CCS","author":"Ganju Karan","year":"2018","unstructured":"Karan Ganju, Qi Wang, Wei Yang, Carl A. Gunter, and Nikita Borisov. Property inference attacks on fully connected neural networks using permutation invariant representations. In CCS, 2018."},{"key":"e_1_3_2_1_17_1","unstructured":"
Google AI Platform May 2019."},{"key":"e_1_3_2_1_18_1","volume-title":"Machine Learning and Computer Security Workshop","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. In Machine Learning and Computer Security Workshop, 2017."},{"key":"e_1_3_2_1_19_1","volume-title":"ICLR","author":"Han Song","year":"2016","unstructured":"Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016."},{"key":"e_1_3_2_1_20_1","volume-title":"NeurIPS","author":"Han Song","year":"2015","unstructured":"Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NeurIPS, 2015."},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_1_22_1","volume-title":"Squeezenet: Alexnet-level accuracy with 50x fewer parameters and 0.5 mb model size. arXiv","author":"Iandola Forrest N","year":"2016","unstructured":"Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and 0.5 mb model size. arXiv, 2016."},{"key":"e_1_3_2_1_23_1","unstructured":"IBM Watson Machine Learning May 2019."},{"key":"e_1_3_2_1_24_1","volume-title":"Manipulating machine learning: Poisoning attacks and countermeasures for regression learning","author":"Jagielski Matthew","year":"2018","unstructured":"Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In IEEE S & P, 2018."},{"key":"e_1_3_2_1_25_1","volume-title":"CCS","author":"Ji Yujie","year":"2018","unstructured":"Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, and Ting Wang. Model-reuse attacks on deep learning systems. In CCS, 2018."},{"key":"e_1_3_2_1_26_1","volume-title":"Intrinsic certified robustness of bagging against data poisoning attacks. arXiv preprint arXiv:2008.04495","author":"Jia Jinyuan","year":"2020","unstructured":"
Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. Intrinsic certified robustness of bagging against data poisoning attacks. arXiv preprint arXiv:2008.04495, 2020."},{"key":"e_1_3_2_1_27_1","volume-title":"CCS","author":"Jia Jinyuan","year":"2019","unstructured":"Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. Memguard: Defending against black-box membership inference attacks via adversarial examples. In CCS, 2019."},{"key":"e_1_3_2_1_28_1","unstructured":"Keras January 2019. https:\/\/keras.io\/."},{"key":"e_1_3_2_1_29_1","volume-title":"NeurIPS","author":"Li Bo","year":"2016","unstructured":"Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. Data poisoning attacks on factorization-based collaborative filtering. In NeurIPS, 2016."},{"key":"e_1_3_2_1_30_1","volume-title":"ICLR","author":"Li Hao","year":"2017","unstructured":"Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. In ICLR, 2017."},{"key":"e_1_3_2_1_31_1","volume-title":"RAID","author":"Liu Kang","year":"2018","unstructured":"Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In RAID, 2018."},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_3_2_1_33_1","volume-title":"Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning","author":"Melis Luca","year":"2019","unstructured":"Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning. In IEEE S & P, 2019."},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2014.2344095"},{"key":"e_1_3_2_1_35_1","volume-title":"CCS","author":"Nasr Milad","year":"2018","unstructured":"Milad Nasr, Reza Shokri, and Amir Houmansadr. Machine learning with membership privacy using adversarial regularization. In CCS, 2018."},{"key":"e_1_3_2_1_36_1","volume-title":"LEET","author":"Nelson B.","year":"2008","unstructured":"B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia. Exploiting machine learning to subvert your spam filter.
In LEET, 2008."},{"key":"e_1_3_2_1_37_1","volume-title":"ICIP","author":"Stefan Winkler Hong-Wei","year":"2014","unstructured":"Hong-Wei NG and Stefan Winkler. A data-driven approach to cleaning large face datasets. In ICIP, 2014."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/JRPROC.1961.287814"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23183"},{"key":"e_1_3_2_1_41_1","volume-title":"ASPLOS","author":"Rouhani Bita Darvish","year":"2019","unstructured":"Bita Darvish Rouhani, Huili Chen, and Farinaz Koushanfar. Deepsigns: A generic watermarking framework for ip protection of deep learning models. In ASPLOS, 2019."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/1644893.1644895"},{"key":"e_1_3_2_1_43_1","volume-title":"NeurIPS","author":"Shafahi Ali","year":"2018","unstructured":"Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. In NeurIPS, 2018."},{"key":"e_1_3_2_1_44_1","volume-title":"CCS","author":"Sharif Mahmood","year":"2016","unstructured":"Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and K Michael Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In CCS, 2016."},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.41"},{"key":"e_1_3_2_1_46_1","volume-title":"CCS","author":"Song Congzheng","year":"2017","unstructured":"Congzheng Song, Thomas Ristenpart, and Vitaly Shmatikov. Machine learning models that remember too much. In CCS, 2017."},{"key":"e_1_3_2_1_47_1","volume-title":"USENIX Security Symposium","author":"Suciu Octavian","year":"2018","unstructured":"Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. When does machine learning fail? generalized transferability for evasion and poisoning attacks. In USENIX Security Symposium, 2018."},{"key":"e_1_3_2_1_48_1","volume-title":"USENIX Security Symposium","author":"Tram\u00e8r Florian","year":"2016","unstructured":"Florian Tram\u00e8r, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction apis.
In USENIX Security Symposium, 2016."},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3078971.3078974"},{"key":"e_1_3_2_1_50_1","volume-title":"CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision","author":"Wang Binghui","year":"2020","unstructured":"Binghui Wang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. On certifying robustness against backdoor attacks via randomized smoothing. In CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision, 2020."},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00038"},{"key":"e_1_3_2_1_52_1","volume-title":"ICML","author":"Xiao Huang","year":"2015","unstructured":"Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. Is feature selection secure against training data poisoning? In ICML, 2015."},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2017.23020"},{"key":"e_1_3_2_1_54_1","volume-title":"ICLR","author":"Zhang Chiyuan","year":"2017","unstructured":"Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017."},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3196494.3196550"},{"key":"e_1_3_2_1_56_1","volume-title":"Backdoor attacks to graph neural networks. arXiv preprint arXiv:2006.11165","author":"Zhang Zaixi","year":"2020","unstructured":"Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. Backdoor attacks to graph neural networks. arXiv preprint arXiv:2006.11165, 2020."},{"key":"e_1_3_2_1_57_1","volume-title":"ICLR","author":"Zoph Barret","year":"2017","unstructured":"Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning.
In ICLR, 2017."}],"event":{"name":"ASIA CCS '21: ACM Asia Conference on Computer and Communications Security","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"],"location":"Virtual Event Hong Kong","acronym":"ASIA CCS '21"},"container-title":["Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3433210.3437519","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3433210.3437519","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3433210.3437519","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:48:11Z","timestamp":1750193291000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3433210.3437519"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,24]]},"references-count":57,"alternative-id":["10.1145\/3433210.3437519","10.1145\/3433210"],"URL":"https:\/\/doi.org\/10.1145\/3433210.3437519","relation":{},"subject":[],"published":{"date-parts":[[2021,5,24]]},"assertion":[{"value":"2021-06-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
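
The record above is a standard Crossref REST API "work" object: the bibliographic payload sits under "message" ("title", "author", "reference", and so on) inside the "status"/"message-type" envelope. As a minimal sketch of how such a record can be fetched and its reference list walked (assuming network access to the public api.crossref.org endpoint, which needs no authentication; the User-Agent contact address below is illustrative, not required):

import json
import urllib.request

# DOI taken from the record's "DOI" field above.
DOI = "10.1145/3433210.3437519"

# Public Crossref REST API endpoint for a single work. Supplying a mailto
# in the User-Agent is Crossref's "polite pool" convention, not mandatory.
url = "https://api.crossref.org/works/" + DOI
req = urllib.request.Request(
    url, headers={"User-Agent": "metadata-example/0.1 (mailto:you@example.org)"}
)

with urllib.request.urlopen(req) as resp:
    record = json.load(resp)

work = record["message"]  # the work object inside the envelope
print(work["title"][0])   # paper title, as in the "title" array above

# Each "reference" entry carries a "key" plus either a resolved "DOI"
# or an "unstructured" free-text citation, matching the record above.
for ref in work.get("reference", []):
    print(ref["key"], ref.get("DOI") or ref.get("unstructured", ""))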