{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T11:14:04Z","timestamp":1773227644099,"version":"3.50.1"},"reference-count":90,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2022,11,9]],"date-time":"2022-11-09T00:00:00Z","timestamp":1667952000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key Research and Development Program of China","award":["2018AAA0101100"],"award-info":[{"award-number":["2018AAA0101100"]}]},{"name":"Hong Kong RGC","award":["TRS T41-603\/20-R"],"award-info":[{"award-number":["TRS T41-603\/20-R"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2023,2,28]]},"abstract":"<jats:p>\n            In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms. On one hand, private and sensitive training data must be kept secure as much as possible in the presence of\n            <jats:italic>semi-honest<\/jats:italic>\n            partners; on the other hand, a certain amount of information has to be exchanged among different parties for the sake of learning utility. 
Such a challenge calls for a privacy-preserving federated learning solution that maximizes the utility of the learned model while maintaining a provable privacy guarantee for participating parties\u2019 private data.\n          <\/jats:p>\n          <jats:p>\n            This article illustrates a general framework that (1) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and (2) delineates quantitative bounds of the privacy-utility trade-off when different protection mechanisms including randomization, sparsity, and homomorphic encryption are used. It is shown that, in general,\n            <jats:italic>there is no free lunch for the privacy-utility trade-off<\/jats:italic>\n            , and one has to trade the preservation of privacy for a certain degree of degraded utility. The quantitative analysis illustrated in this article may serve as guidance for the design of practical federated learning algorithms.\n          <\/jats:p>","DOI":"10.1145\/3563219","type":"journal-article","created":{"date-parts":[[2022,9,20]],"date-time":"2022-09-20T11:17:02Z","timestamp":1663672622000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":36,"title":["No Free Lunch Theorem for Security and Utility in Federated Learning"],"prefix":"10.1145","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9065-6852","authenticated-orcid":false,"given":"Xiaojin","family":"Zhang","sequence":"first","affiliation":[{"name":"Hong Kong University of Science and Technology, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8266-4561","authenticated-orcid":false,"given":"Hanlin","family":"Gu","sequence":"additional","affiliation":[{"name":"WeBank, Shenzhen"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8162-7096","authenticated-orcid":false,"given":"Lixin","family":"Fan","sequence":"additional","affiliation":[{"name":"WeBank, 
Shenzhen"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2587-6028","authenticated-orcid":false,"given":"Kai","family":"Chen","sequence":"additional","affiliation":[{"name":"Hong Kong University of Science and Technology, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5059-8360","authenticated-orcid":false,"given":"Qiang","family":"Yang","sequence":"additional","affiliation":[{"name":"WeBank and Hong Kong University of Science and Technology, Hong Kong"}]}],"member":"320","published-online":{"date-parts":[[2022,11,9]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978318"},{"issue":"5","key":"e_1_3_3_3_2","first-page":"1333","article-title":"Privacy-preserving deep learning via additively homomorphic encryption","volume":"13","author":"Phong Le Trieu","year":"2017","unstructured":"Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. 2017. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security 13, 5 (2017), 1333\u20131345.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2018.2865558"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/MARK.1979.8817296"},{"key":"e_1_3_3_6_2","first-page":"675","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Blau Yochai","year":"2019","unstructured":"Yochai Blau and Tomer Michaeli. 2019. Rethinking lossy compression: The rate-distortion-perception tradeoff. In Proceedings of the International Conference on Machine Learning. 675\u2013685."},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133982"},{"key":"e_1_3_3_8_2","volume-title":"Bayesian Inference in Statistical Analysis","author":"Box George E. P.","year":"2011","unstructured":"George E. P. Box and George C. Tiao. 2011. 
Bayesian Inference in Statistical Analysis. Vol. 40. John Wiley & Sons."},{"key":"e_1_3_3_9_2","volume-title":"Convex Optimization","author":"Boyd Stephen","year":"2004","unstructured":"Stephen Boyd and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press."},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-32009-5_50"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/2488608.2488680"},{"issue":"23","key":"e_1_3_3_12_2","article-title":"Deep learning with Gaussian differential privacy","volume":"2020","author":"Bu Zhiqi","year":"2020","unstructured":"Zhiqi Bu, Jinshuo Dong, Qi Long, and Weijie J. Su. 2020. Deep learning with Gaussian differential privacy. Harvard Data Science Review 2020, 23 (2020).","journal-title":"Harvard Data Science Review"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/BFb0091539"},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01212"},{"key":"e_1_3_3_15_2","article-title":"SecureBoost: A lossless federated learning framework","author":"Cheng Kewei","year":"2021","unstructured":"Kewei Cheng, Tao Fan, Yilun Jin, Yang Liu, Tianjian Chen, Dimitrios Papadopoulos, and Qiang Yang. 2021. SecureBoost: A lossless federated learning framework. arXiv:1901.08755 (2021).","journal-title":"arXiv:1901.08755"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1968.tb00722.x"},{"key":"e_1_3_3_17_2","article-title":"Bert: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_3_18_2","article-title":"The total variation distance between high-dimensional Gaussians","author":"Devroye Luc","year":"2018","unstructured":"Luc Devroye, Abbas Mehrabian, and Tommy Reddad. 2018. The total variation distance between high-dimensional Gaussians. arXiv preprint arXiv:1810.08693 (2018).","journal-title":"arXiv preprint arXiv:1810.08693"},{"key":"e_1_3_3_19_2","article-title":"Unified language model pre-training for natural language understanding and generation","volume":"32","author":"Dong Li","year":"2019","unstructured":"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/Allerton.2012.6483382"},{"key":"e_1_3_3_21_2","article-title":"Local privacy and minimax bounds: Sharp rates for probability estimation","author":"Duchi John C.","year":"2013","unstructured":"John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local privacy and minimax bounds: Sharp rates for probability estimation. 
arXiv preprint arXiv:1305.6000 (2013).","journal-title":"arXiv preprint arXiv:1305.6000"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1007\/11787006_1"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.5555\/1791834.1791836"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1007\/11681878_14"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.29012\/jpc.v2i1.585"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1561\/0400000042"},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2003.813506"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_3_29_2","article-title":"Inverting gradients\u2014How easy is it to break privacy in federated learning?","author":"Geiping Jonas","year":"2020","unstructured":"Jonas Geiping, Hartmut Bauermeister, Hannah Dr\u00f6ge, and Michael Moeller. 2020. Inverting gradients\u2014How easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053 (2020).","journal-title":"arXiv preprint arXiv:2003.14053"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.1984.4767596"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.1984.4767596"},{"key":"e_1_3_3_32_2","volume-title":"A Fully Homomorphic Encryption Scheme","author":"Gentry Craig","year":"2009","unstructured":"Craig Gentry. 2009. A Fully Homomorphic Encryption Scheme. Stanford University, Stanford, CA."},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.5555\/1881412.1881422"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40041-4_5"},{"key":"e_1_3_3_35_2","article-title":"Differentially private federated learning: A client level perspective","author":"Geyer Robin C.","year":"2017","unstructured":"Robin C. Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially private federated learning: A client level perspective. 
arXiv preprint arXiv:1712.07557 (2017).","journal-title":"arXiv preprint arXiv:1712.07557"},{"key":"e_1_3_3_36_2","first-page":"2521","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics","author":"Girgis Antonious","year":"2021","unstructured":"Antonious Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, and Ananda Theertha Suresh. 2021. Shuffled model of differential privacy in federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics. 2521\u20132529."},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1016\/0022-0000(84)90070-9"},{"key":"e_1_3_3_38_2","article-title":"Federated deep learning with Bayesian privacy","author":"Gu Hanlin","year":"2021","unstructured":"Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan Yao, and Qiang Yang. 2021. Federated deep learning with Bayesian privacy. arXiv preprint arXiv:2109.13012 (2021).","journal-title":"arXiv preprint arXiv:2109.13012"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2018.05.003"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACOMP.2019.00022"},{"key":"e_1_3_3_41_2","article-title":"On the convergence of local descent methods in federated learning","author":"Haddadpour Farzin","year":"2019","unstructured":"Farzin Haddadpour and Mehrdad Mahdavi. 2019. On the convergence of local descent methods in federated learning. 
arXiv preprint arXiv:1910.14425 (2019).","journal-title":"arXiv preprint arXiv:1910.14425"},{"key":"e_1_3_3_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3359789.3359824"},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134012"},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISIT.2018.8437632"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.14778\/2824032.2824067"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2019.2962804"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1561\/2200000083"},{"key":"e_1_3_3_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9413764"},{"key":"e_1_3_3_50_2","article-title":"Federated optimization: Distributed machine learning for on-device intelligence","author":"Kone\u010dn\u00fd Jakub","year":"2016","unstructured":"Jakub Kone\u010dn\u00fd, H. Brendan McMahan, Daniel Ramage, and Peter Richt\u00e1rik. 2016. Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527 (2016).","journal-title":"arXiv preprint arXiv:1610.02527"},{"key":"e_1_3_3_51_2","article-title":"Federated learning: Strategies for improving communication efficiency","author":"Kone\u010dn\u00fd Jakub","year":"2016","unstructured":"Jakub Kone\u010dn\u00fd, H. Brendan McMahan, Felix X. Yu, Peter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016).","journal-title":"arXiv preprint arXiv:1610.05492"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403125"},{"key":"e_1_3_3_53_2","article-title":"On the convergence of FedAvg on non-IID data","author":"Li Xiang","year":"2019","unstructured":"Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2019. 
On the convergence of FedAvg on non-IID data. arXiv preprint arXiv:1907.02189 (2019).","journal-title":"arXiv preprint arXiv:1907.02189"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2019.2935768"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-022-01664-x"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/Allerton.2013.6736724"},{"key":"e_1_3_3_57_2","first-page":"1273","volume-title":"Proceedings of the 20th International Conference on Artificial Intelligence and Statistics","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. 1273\u20131282."},{"key":"e_1_3_3_58_2","article-title":"Federated learning of deep networks using model averaging","author":"McMahan H. Brendan","year":"2016","unstructured":"H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Ag\u00fcera y Arcas. 2016. Federated learning of deep networks using model averaging. arXiv preprint arXiv:1602.05629 (2016).","journal-title":"arXiv preprint arXiv:1602.05629"},{"key":"e_1_3_3_59_2","doi-asserted-by":"publisher","DOI":"10.1137\/100811970"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/CSF.2017.11"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-48910-X_16"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/1536414.1536461"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2019.2903658"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/1568318.1568324"},{"issue":"11","key":"e_1_3_3_65_2","first-page":"169","article-title":"On data banks and privacy homomorphisms","volume":"4","author":"Rivest Ronald L.","year":"1978","unstructured":"Ronald L. 
Rivest, Len Adleman, and Michael L. Dertouzos. 1978. On data banks and privacy homomorphisms. Foundations of Secure Computation 4, 11 (1978), 169\u2013180.","journal-title":"Foundations of Secure Computation"},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","DOI":"10.1109\/tifs.2013.2253320"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.5555\/2796561.2796598"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISIT44484.2020.9174426"},{"key":"e_1_3_3_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/359168.359176"},{"key":"e_1_3_3_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813687"},{"key":"e_1_3_3_71_2","article-title":"SplitFed: When federated learning meets split learning","author":"Thapa Chandra","year":"2020","unstructured":"Chandra Thapa, Mahawaga Arachchige Pathum Chamikara, and Seyit Camtepe. 2020. SplitFed: When federated learning meets split learning. arXiv preprint arXiv:2004.12088 (2020).","journal-title":"arXiv preprint arXiv:2004.12088"},{"key":"e_1_3_3_72_2","article-title":"Rate distortion theory: A mathematical basis for data compression, Toby Berger. Prentice-Hall, Urbana, IL (1971)","volume":"24","author":"Tretiak Oleh J.","year":"1974","unstructured":"Oleh J. Tretiak. 1974. Rate distortion theory: A mathematical basis for data compression, Toby Berger. Prentice-Hall, Urbana, IL (1971). Information & Computation 24 (1974), 92\u201395.","journal-title":"Information & Computation"},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/BigData47090.2019.9005465"},{"key":"e_1_3_3_74_2","unstructured":"Aleksei Triastcyn and Boi Faltings. 2020. Bayesian differential privacy for machine learning. In Proceedings of the 37th International Conference on Machine Learning Hal Daum\u00e9 III and Aarti Singh (Eds.). Proceedings of Machine Learning Research Vol. 119. PMLR 9583\u20139592. 
https:\/\/proceedings.mlr.press\/v119\/triastcyn20a.html."},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3338501.3357370"},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3378679.3394533"},{"key":"e_1_3_3_77_2","article-title":"Differentially private empirical risk minimization revisited: Faster and more general","volume":"30","author":"Wang Di","year":"2017","unstructured":"Di Wang, Minwei Ye, and Jinhui Xu. 2017. Differentially private empirical risk minimization revisited: Faster and more general. In Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/ALLERTON.2017.8262832"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2019.8737416"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-21736-9_11"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1109\/4235.585893"},{"key":"e_1_3_3_82_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298981"},{"key":"e_1_3_3_83_2","doi-asserted-by":"publisher","DOI":"10.2200\/S00960ED2V01Y201910AIM043"},{"key":"e_1_3_3_84_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01607"},{"key":"e_1_3_3_85_2","first-page":"493","volume-title":"Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC\u201920)","author":"Zhang Chengliang","year":"2020","unstructured":"Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang, Feng Yan, and Yang Liu. 2020. BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC\u201920). 493\u2013506. 
https:\/\/www.usenix.org\/conference\/atc20\/presentation\/zhang-chengliang."},{"key":"e_1_3_3_86_2","first-page":"493","volume-title":"Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC\u201920)","author":"Zhang Chengliang","year":"2020","unstructured":"Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang, Feng Yan, and Yang Liu. 2020. BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC\u201920). 493\u2013506."},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/GLOBECOM38437.2019.9014272"},{"key":"e_1_3_3_88_2","article-title":"iDLG: Improved deep leakage from gradients","author":"Zhao Bo","year":"2020","unstructured":"Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. 2020. iDLG: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610 (2020).","journal-title":"arXiv preprint arXiv:2001.02610"},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3037194"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-63076-8_2"},{"key":"e_1_3_3_91_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS\u201919)","author":"Zhu Ligeng","year":"2019","unstructured":"Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. 
In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS\u201919)."}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3563219","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3563219","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:38:10Z","timestamp":1750178290000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3563219"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,9]]},"references-count":90,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,2,28]]}},"alternative-id":["10.1145\/3563219"],"URL":"https:\/\/doi.org\/10.1145\/3563219","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,9]]},"assertion":[{"value":"2022-03-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-08-24","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-11-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}