{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,11]],"date-time":"2026-02-11T13:57:12Z","timestamp":1770818232162,"version":"3.50.1"},"reference-count":159,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2021,7,18]],"date-time":"2021-07-18T00:00:00Z","timestamp":1626566400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["2018YFB1004704"],"award-info":[{"award-number":["2018YFB1004704"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Hong Kong RGC Research Impact Fund","award":["R5060-19 and R5034-18"],"award-info":[{"award-number":["R5060-19 and R5034-18"]}]},{"DOI":"10.13039\/501100010877","name":"Shenzhen Science and Technology Innovation Commission","doi-asserted-by":"crossref","award":["R2020A045"],"award-info":[{"award-number":["R2020A045"]}],"id":[{"id":"10.13039\/501100010877","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"Collaborative Innovation Center of Novel Software Technology and Industrialization","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Hong Kong RGC General Research Fund","award":["152221\/19E and 15220320\/20E"],"award-info":[{"award-number":["152221\/19E and 15220320\/20E"]}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["B200202176"],"award-info":[{"award-number":["B200202176"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]},{"name":"RCN-Diku INTPART 
BDEM","award":["261685"],"award-info":[{"award-number":["261685"]}]},{"DOI":"10.13039\/100017440","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61832005, 61872310, and 61872171"],"award-info":[{"award-number":["61832005, 61872310, and 61872171"]}],"id":[{"id":"10.13039\/100017440","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Hong Kong RGC Collaborative Research Fund","award":["C5026-18G"],"award-info":[{"award-number":["C5026-18G"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2022,9,30]]},"abstract":"<jats:p>\n            <jats:bold>Machine Learning<\/jats:bold>\n            (\n            <jats:bold>ML<\/jats:bold>\n            ) has demonstrated great promise in various fields, e.g., self-driving and smart cities, which are fundamentally altering the way individuals and organizations live, work, and interact. Traditional centralized learning frameworks require uploading all training data from different sources to a remote data server, which incurs significant communication overhead, service latency, and privacy issues.\n          <\/jats:p>\n          <jats:p>\n            To further extend the frontiers of the learning paradigm, a new learning concept, namely,\n            <jats:bold>Edge Learning<\/jats:bold>\n            (\n            <jats:bold>EL<\/jats:bold>\n            ) is emerging. It is complementary to cloud-based methods for big data analytics by enabling distributed edge nodes to cooperatively train models and conduct inference with their locally cached data. To explore the new characteristics and potential prospects of EL, we conduct a comprehensive survey of the recent research efforts on EL. Specifically, we first introduce the background and motivation. We then discuss the challenging issues in EL from the aspects of data, computation, and communication. 
Furthermore, we provide an overview of the enabling technologies for EL, including model training, inference, security guarantee, privacy protection, and incentive mechanism. Finally, we discuss future research opportunities on EL. We believe that this survey will provide a comprehensive overview of EL and stimulate fruitful future research in this field.\n          <\/jats:p>","DOI":"10.1145\/3464419","type":"journal-article","created":{"date-parts":[[2021,7,18]],"date-time":"2021-07-18T16:07:33Z","timestamp":1626624453000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":37,"title":["Edge Learning"],"prefix":"10.1145","volume":"54","author":[{"given":"Jie","family":"Zhang","sequence":"first","affiliation":[{"name":"The Hong Kong Polytechnic University, China"}]},{"given":"Zhihao","family":"Qu","sequence":"additional","affiliation":[{"name":"Hohai University, The Hong Kong Polytechnic University, China"}]},{"given":"Chenxi","family":"Chen","sequence":"additional","affiliation":[{"name":"Nanjing University, China"}]},{"given":"Haozhao","family":"Wang","sequence":"additional","affiliation":[{"name":"Huazhong University of Science and Technology, The Hong Kong Polytechnic University, China"}]},{"given":"Yufeng","family":"Zhan","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University, China"}]},{"given":"Baoliu","family":"Ye","sequence":"additional","affiliation":[{"name":"Nanjing University, China"}]},{"given":"Song","family":"Guo","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University, China"}]}],"member":"320","published-online":{"date-parts":[[2021,7,18]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Proc. of ICML.","author":"Agarwal Naman","year":"2017","unstructured":"Naman Agarwal and Karan Singh . 2017 . The price of differential privacy for online learning . In Proc. of ICML. Naman Agarwal and Karan Singh. 2017. 
The price of differential privacy for online learning. In Proc. of ICML."},{"key":"e_1_2_1_2_1","volume-title":"Proc. of NeurIPS. 1709\u20131720","author":"Alistarh Dan","year":"2017","unstructured":"Dan Alistarh , Demjan Grubic , Jerry Li , Ryota Tomioka , and Milan Vojnovic . 2017 . QSGD: Communication-efficient SGD via gradient quantization and encoding . In Proc. of NeurIPS. 1709\u20131720 . Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. 2017. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Proc. of NeurIPS. 1709\u20131720."},{"key":"e_1_2_1_3_1","volume-title":"Proc. of NeurIPS. 5973\u20135983","author":"Alistarh Dan","year":"2018","unstructured":"Dan Alistarh , Torsten Hoefler , Mikael Johansson , Nikola Konstantinov , Sarit Khirirat , and C\u00e9dric Renggli . 2018 . The convergence of sparsified gradient methods . In Proc. of NeurIPS. 5973\u20135983 . Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and C\u00e9dric Renggli. 2018. The convergence of sparsified gradient methods. In Proc. of NeurIPS. 5973\u20135983."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISIT.2019.8849334"},{"key":"e_1_2_1_5_1","volume-title":"Proc. of NeurIPS. 5151\u20135159","author":"Banner Ron","year":"2018","unstructured":"Ron Banner , Itay Hubara , Elad Hoffer , and Daniel Soudry . 2018 . Scalable methods for 8-bit training of neural networks . In Proc. of NeurIPS. 5151\u20135159 . Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. 2018. Scalable methods for 8-bit training of neural networks. In Proc. of NeurIPS. 5151\u20135159."},{"issue":"2019","key":"e_1_2_1_6_1","first-page":"634","article-title":"Analyzing federated learning through an adversarial lens","volume":"97","author":"Bhagoji Arjun Nitin","year":"2018","unstructured":"Arjun Nitin Bhagoji , Supriyo Chakraborty , Prateek Mittal , and Seraphin Calo . 2018 . 
Analyzing federated learning through an adversarial lens . Proc. of ICML 97 ( 2019 ), 634 \u2013 643 . Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. 2018. Analyzing federated learning through an adversarial lens. Proc. of ICML 97 (2019), 634\u2013643.","journal-title":"Proc. of ICML"},{"key":"e_1_2_1_7_1","volume-title":"Proc. of NeurIPS. 119\u2013129","author":"Blanchard Peva","year":"2017","unstructured":"Peva Blanchard , Rachid Guerraoui , Julien Stainer , et\u00a0al. 2017 . Machine learning with adversaries: Byzantine tolerant gradient descent . In Proc. of NeurIPS. 119\u2013129 . Peva Blanchard, Rachid Guerraoui, Julien Stainer, et\u00a0al. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proc. of NeurIPS. 119\u2013129."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133982"},{"key":"e_1_2_1_9_1","volume-title":"Low latency privacy preserving inference. CoRR abs\/1812.10659","author":"Brutzkus Alon","year":"2018","unstructured":"Alon Brutzkus , Oren Elisha , and Ran Gilad-Bachrach . 2018. Low latency privacy preserving inference. CoRR abs\/1812.10659 ( 2018 ). arxiv:1812.10659.http:\/\/arxiv.org\/abs\/1812.10659. Alon Brutzkus, Oren Elisha, and Ran Gilad-Bachrach. 2018. Low latency privacy preserving inference. CoRR abs\/1812.10659 (2018). arxiv:1812.10659.http:\/\/arxiv.org\/abs\/1812.10659."},{"key":"e_1_2_1_10_1","volume-title":"Dulloor","author":"Canel Christopher","year":"2019","unstructured":"Christopher Canel , Thomas Kim , Giulio Zhou , Conglong Li , Hyeontaek Lim , David G. Andersen , Michael Kaminsky , and Subramanya R . Dulloor . 2019 . Scaling video analytics on constrained edge nodes. arxiv:1905.13536.http:\/\/arxiv.org\/abs\/1905.13536. Christopher Canel, Thomas Kim, Giulio Zhou, Conglong Li, Hyeontaek Lim, David G. Andersen, Michael Kaminsky, and Subramanya R. Dulloor. 2019. Scaling video analytics on constrained edge nodes. 
arxiv:1905.13536.http:\/\/arxiv.org\/abs\/1905.13536."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3267809.3275463"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11728"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2921977"},{"key":"e_1_2_1_14_1","first-page":"902","article-title":"Draco: Byzantine-resilient distributed training via redundant gradients","volume":"80","author":"Chen Lingjiao","year":"2018","unstructured":"Lingjiao Chen , Hongyi Wang , Zachary Charles , and Dimitris Papailiopoulos . 2018 . Draco: Byzantine-resilient distributed training via redundant gradients . Proc. of ICML 80 (2018), 902 \u2013 911 . Lingjiao Chen, Hongyi Wang, Zachary Charles, and Dimitris Papailiopoulos. 2018. Draco: Byzantine-resilient distributed training via redundant gradients. Proc. of ICML 80 (2018), 902\u2013911.","journal-title":"Proc. of ICML"},{"key":"e_1_2_1_15_1","volume-title":"Proc. of NeurIPS. 5050\u20135060","author":"Chen Tianyi","year":"2018","unstructured":"Tianyi Chen , Georgios Giannakis , Tao Sun , and Wotao Yin . 2018 . LAG: Lazily aggregated gradient for communication-efficient distributed learning . In Proc. of NeurIPS. 5050\u20135060 . Tianyi Chen, Georgios Giannakis, Tao Sun, and Wotao Yin. 2018. LAG: Lazily aggregated gradient for communication-efficient distributed learning. In Proc. of NeurIPS. 5050\u20135060."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3154503"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/JETCAS.2013.2244771"},{"key":"e_1_2_1_18_1","volume-title":"Adascale: Towards real-time video object detection using adaptive scaling. arXiv:1902.02910","author":"Chin Ting-Wu","year":"2019","unstructured":"Ting-Wu Chin , Ruizhou Ding , and Diana Marculescu . 2019 . Adascale: Towards real-time video object detection using adaptive scaling. arXiv:1902.02910 . http:\/\/arxiv.org\/abs\/1902.02910. 
Ting-Wu Chin, Ruizhou Ding, and Diana Marculescu. 2019. Adascale: Towards real-time video object detection using adaptive scaling. arXiv:1902.02910. http:\/\/arxiv.org\/abs\/1902.02910."},{"key":"e_1_2_1_19_1","doi-asserted-by":"crossref","unstructured":"Yi-Min Chou Yi-Ming Chan Jia-Hong Lee Chih-Yi Chiu and Chu-Song Chen. 2018. Unifying and merging well-trained deep neural networks for inference stage. arXiv:1805.04980. http:\/\/arxiv.org\/abs\/1805.04980.  Yi-Min Chou Yi-Ming Chan Jia-Hong Lee Chih-Yi Chiu and Chu-Song Chen. 2018. Unifying and merging well-trained deep neural networks for inference stage. arXiv:1805.04980. http:\/\/arxiv.org\/abs\/1805.04980.","DOI":"10.24963\/ijcai.2018\/283"},{"key":"e_1_2_1_20_1","volume-title":"Seunghak Lee, Gregory R. Ganger, Garth Gibson, Kimberly Keeton, and Eric Xing.","author":"Cipar James","year":"2013","unstructured":"James Cipar , Qirong Ho , Jin Kyu Kim , Seunghak Lee, Gregory R. Ganger, Garth Gibson, Kimberly Keeton, and Eric Xing. 2013 . Solving the straggler problem with bounded staleness. Presented as part of the 14th Workshop on Hot Topics in Operating Systems . James Cipar, Qirong Ho, Jin Kyu Kim, Seunghak Lee, Gregory R. Ganger, Garth Gibson, Kimberly Keeton, and Eric Xing. 2013. Solving the straggler problem with bounded staleness. Presented as part of the 14th Workshop on Hot Topics in Operating Systems."},{"key":"e_1_2_1_21_1","volume-title":"Proc. of NeurIPS. 1223\u20131231","author":"Dean Jeffrey","year":"2012","unstructured":"Jeffrey Dean , Greg Corrado , Rajat Monga , Kai Chen , Matthieu Devin , Mark Mao , Marc\u2019aurelio Ranzato , Andrew Senior , Paul Tucker , Ke Yang , et\u00a0al. 2012 . Large scale distributed deep networks . In Proc. of NeurIPS. 1223\u20131231 . Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc\u2019aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, et\u00a0al. 2012. Large scale distributed deep networks. In Proc. of NeurIPS. 
1223\u20131231."},{"key":"e_1_2_1_22_1","first-page":"1646","article-title":"SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives","volume":"2","author":"Defazio Aaron","year":"2014","unstructured":"Aaron Defazio , Francis Bach , and Simon Lacoste-Julien . 2014 . SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives . Proc. of NeurIPS 2 , 1646 \u2013 1654 . Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. 2014. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Proc. of NeurIPS 2, 1646\u20131654.","journal-title":"Proc. of NeurIPS"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.2984887"},{"key":"e_1_2_1_24_1","volume-title":"Proc. of NeurIPS. 10976\u201310987","author":"Dennis Don Kurian","year":"2018","unstructured":"Don Kurian Dennis , Chirag Pabbaraju , Harsha Vardhan Simhadri , and Prateek Jain . 2018 . Multiple instance learning for efficient sequential data classification on resource-constrained devices . In Proc. of NeurIPS. 10976\u201310987 . Don Kurian Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, and Prateek Jain. 2018. Multiple instance learning for efficient sequential data classification on resource-constrained devices. In Proc. of NeurIPS. 10976\u201310987."},{"key":"e_1_2_1_25_1","volume-title":"Proc. of NeurIPS. 1269\u20131277","author":"Denton Emily L.","year":"2014","unstructured":"Emily L. Denton , Wojciech Zaremba , Joan Bruna , Yann LeCun , and Rob Fergus . 2014 . Exploiting linear structure within convolutional networks for efficient evaluation . In Proc. of NeurIPS. 1269\u20131277 . Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In Proc. of NeurIPS. 
1269\u20131277."},{"key":"e_1_2_1_26_1","volume-title":"Stochastic activation pruning for robust adversarial defense. CoRR abs\/1803.01442","author":"Dhillon Guneet S.","year":"2018","unstructured":"Guneet S. Dhillon , Kamyar Azizzadenesheli , Zachary C. Lipton , Jeremy Bernstein , Jean Kossaifi , Aran Khanna , and Anima Anandkumar . 2018. Stochastic activation pruning for robust adversarial defense. CoRR abs\/1803.01442 ( 2018 ). arxiv:1803.01442.http:\/\/arxiv.org\/abs\/1803.01442. Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. 2018. Stochastic activation pruning for robust adversarial defense. CoRR abs\/1803.01442 (2018). arxiv:1803.01442.http:\/\/arxiv.org\/abs\/1803.01442."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3293883.3295713"},{"key":"e_1_2_1_28_1","article-title":"Adaptive subgradient methods for online learning and stochastic optimization","author":"Duchi John","year":"2011","unstructured":"John Duchi , Elad Hazan , and Yoram Singer . 2011 . Adaptive subgradient methods for online learning and stochastic optimization . Journal of Machine Learning Research 12 ( July 2011), 2121\u20132159. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (July 2011), 2121\u20132159.","journal-title":"Journal of Machine Learning Research 12"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/iThings\/GreenCom\/CPSCom\/SmartData.2019.00148"},{"key":"e_1_2_1_30_1","volume-title":"Proc. of AISTATS","volume":"84","author":"Ge Jason","year":"2018","unstructured":"Jason Ge , Zhaoran Wang , Mengdi Wang , and Han Liu . 2018 . Minimax-optimal privacy-preserving sparse PCA in distributed systems . In Proc. of AISTATS , Vol. 84 . PMLR, 1589\u20131598. Jason Ge, Zhaoran Wang, Mengdi Wang, and Han Liu. 2018. 
Minimax-optimal privacy-preserving sparse PCA in distributed systems. In Proc. of AISTATS, Vol. 84. PMLR, 1589\u20131598."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1137\/120880811"},{"key":"e_1_2_1_32_1","doi-asserted-by":"crossref","unstructured":"Irene Giacomelli Somesh Jha Marc Joye C. David Page and Kyonghwan Yoon. 2018. Privacy-preserving ridge regression with only linearly-homomorphic encryption. In Applied Cryptography and Network Security.  Irene Giacomelli Somesh Jha Marc Joye C. David Page and Kyonghwan Yoon. 2018. Privacy-preserving ridge regression with only linearly-homomorphic encryption. In Applied Cryptography and Network Security.","DOI":"10.1007\/978-3-319-93387-0_13"},{"key":"e_1_2_1_33_1","unstructured":"Dibakar Gope Ganesh Dasika and Matthew Mattina. 2019. Ternary hybrid neural-tree networks for highly constrained IoT applications. arxiv:cs.LG\/1903.01531. http:\/\/arxiv.org\/abs\/1903.01531.  Dibakar Gope Ganesh Dasika and Matthew Mattina. 2019. Ternary hybrid neural-tree networks for highly constrained IoT applications. arxiv:cs.LG\/1903.01531. http:\/\/arxiv.org\/abs\/1903.01531."},{"key":"e_1_2_1_34_1","unstructured":"Renjie Gu Shuo Yang and Fan Wu. 2019. Distributed machine learning on mobile devices: A survey. arXiv:1909.08329. http:\/\/arxiv.org\/abs\/1909.08329.  Renjie Gu Shuo Yang and Fan Wu. 2019. Distributed machine learning on mobile devices: A survey. arXiv:1909.08329. http:\/\/arxiv.org\/abs\/1909.08329."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241557"},{"key":"e_1_2_1_36_1","volume-title":"Proc. of NeurIPS.","author":"Guo Yiwen","year":"2018","unstructured":"Yiwen Guo , Chao Zhang , Changshui Zhang , and Yurong Chen . 2018 . Sparse DNNs with improved adversarial robustness . In Proc. of NeurIPS. Yiwen Guo, Chao Zhang, Changshui Zhang, and Yurong Chen. 2018. Sparse DNNs with improved adversarial robustness. In Proc. 
of NeurIPS."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2912703"},{"key":"e_1_2_1_38_1","first-page":"669","article-title":"Secure multiple linear regression based on homomorphic encryption","volume":"27","author":"Hall Rob","year":"2011","unstructured":"Rob Hall , Stephen E. Fienberg , and Yuval Nardi . 2011 . Secure multiple linear regression based on homomorphic encryption . Journal of Official Statistics 27 , 4 (2011), 669 \u2013 691 . Rob Hall, Stephen E. Fienberg, and Yuval Nardi. 2011. Secure multiple linear regression based on homomorphic encryption. Journal of Official Statistics 27, 4 (2011), 669\u2013691.","journal-title":"Journal of Official Statistics"},{"key":"e_1_2_1_39_1","volume-title":"Dally","author":"Han Song","year":"2015","unstructured":"Song Han , Huizi Mao , and William J . Dally . 2015 . Deep compression: Compressing de ep neural networks with pruning, trained quantization and huffman coding. arXiv:1510.00149. https:\/\/arxiv.org\/abs\/1510.00149. Song Han, Huizi Mao, and William J. Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv:1510.00149. https:\/\/arxiv.org\/abs\/1510.00149."},{"key":"e_1_2_1_40_1","volume-title":"Proc. of NeurIPS. 1135\u20131143","author":"Han Song","year":"2015","unstructured":"Song Han , Jeff Pool , John Tran , and William Dally . 2015 . Learning both weights and connections for efficient neural network . In Proc. of NeurIPS. 1135\u20131143 . Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Proc. of NeurIPS. 1135\u20131143."},{"key":"e_1_2_1_41_1","volume-title":"Proc. of ICLR.","author":"He Warren","year":"2018","unstructured":"Warren He , Bo Li , and Dawn Song . 2018 . Decision boundary analysis of adversarial examples . In Proc. of ICLR. Warren He, Bo Li, and Dawn Song. 2018. Decision boundary analysis of adversarial examples. 
In Proc. of ICLR."},{"key":"e_1_2_1_42_1","volume-title":"Proc. of NeurIPS. 3226\u20133235","author":"Heikkil\u00e4 Mikko","year":"2017","unstructured":"Mikko Heikkil\u00e4 , Eemil Lagerspetz , Samuel Kaski , Kana Shimizu , Sasu Tarkoma , and Antti Honkela . 2017 . Differentially private Bayesian learning on distributed data . In Proc. of NeurIPS. 3226\u20133235 . Mikko Heikkil\u00e4, Eemil Lagerspetz, Samuel Kaski, Kana Shimizu, Sasu Tarkoma, and Antti Honkela. 2017. Differentially private Bayesian learning on distributed data. In Proc. of NeurIPS. 3226\u20133235."},{"key":"e_1_2_1_43_1","unstructured":"Ehsan Hesamifard Hassan Takabi and Mehdi Ghasemi. 2017. CryptoDL: Deep neural networks over encrypted data. arXiv:1711.05189. http:\/\/arxiv.org\/abs\/1711.05189.  Ehsan Hesamifard Hassan Takabi and Mehdi Ghasemi. 2017. CryptoDL: Deep neural networks over encrypted data. arXiv:1711.05189. http:\/\/arxiv.org\/abs\/1711.05189."},{"key":"e_1_2_1_44_1","volume-title":"COURSERA: Neural networks for machine learning. Lecture 9c: Using noise as a regularizer.","author":"Hinton Geoffrey","year":"2012","unstructured":"Geoffrey Hinton , N. Srivastava , K. Swersky , T. Tieleman , and A. R. Mohamed . 2012 . COURSERA: Neural networks for machine learning. Lecture 9c: Using noise as a regularizer. Geoffrey Hinton, N. Srivastava, K. Swersky, T. Tieleman, and A. R. Mohamed. 2012. COURSERA: Neural networks for machine learning. Lecture 9c: Using noise as a regularizer."},{"key":"e_1_2_1_45_1","unstructured":"Geoffrey Hinton Oriol Vinyals and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv:1503.02531. http:\/\/arxiv.org\/abs\/1503.02531.  Geoffrey Hinton Oriol Vinyals and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv:1503.02531. 
http:\/\/arxiv.org\/abs\/1503.02531."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134012"},{"key":"e_1_2_1_47_1","first-page":"1","article-title":"GRNN: Low-latency and scalable RNN inference on GPUs","volume":"41","author":"Holmes Connor","year":"2019","unstructured":"Connor Holmes , Daniel Mawhirter , Yuxiong He , Feng Yan , and Bo Wu . 2019 . GRNN: Low-latency and scalable RNN inference on GPUs . In Proc. of EuroSys. ACM , 41 : 1 \u2013 41 :16. Connor Holmes, Daniel Mawhirter, Yuxiong He, Feng Yan, and Bo Wu. 2019. GRNN: Low-latency and scalable RNN inference on GPUs. In Proc. of EuroSys. ACM, 41:1\u201341:16.","journal-title":"Proc. of EuroSys. ACM"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/2534169.2486006"},{"key":"e_1_2_1_49_1","volume-title":"Loadaboost: Loss-based adaboost federated machine learning on medical data. arXiv:1811.12629","author":"Huang Li","year":"2018","unstructured":"Li Huang , Yifeng Yin , Zeng Fu , Shifa Zhang , Hao Deng , and Dianbo Liu . 2018 . Loadaboost: Loss-based adaboost federated machine learning on medical data. arXiv:1811.12629 . http:\/\/arxiv.org\/abs\/1811.12629. Li Huang, Yifeng Yin, Zeng Fu, Shifa Zhang, Hao Deng, and Dianbo Liu. 2018. Loadaboost: Loss-based adaboost federated machine learning on medical data. arXiv:1811.12629. http:\/\/arxiv.org\/abs\/1811.12629."},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3187009.3177734"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00057"},{"key":"e_1_2_1_52_1","volume-title":"Proc. of NeurIPS. 6343\u20136354","author":"Jayaraman Bargav","year":"2018","unstructured":"Bargav Jayaraman , Lingxiao Wang , David Evans , and Quanquan Gu . 2018 . Distributed learning without distress: Privacy-preserving empirical risk minimization . In Proc. of NeurIPS. 6343\u20136354 . Bargav Jayaraman, Lingxiao Wang, David Evans, and Quanquan Gu. 2018. 
Distributed learning without distress: Privacy-preserving empirical risk minimization. In Proc. of NeurIPS. 6343\u20136354."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3267809.3267828"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243757"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3035933"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3307681.3326608"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.5555\/3357034.3357049"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3302424.3303950"},{"key":"e_1_2_1_59_1","volume-title":"Kingma and Jimmy Ba","author":"Diederik","year":"2014","unstructured":"Diederik P. Kingma and Jimmy Ba . 2014 . Adam : A method for stochastic optimization. arXiv:1412.6980. http:\/\/arxiv.org\/abs\/1811.12629. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980. http:\/\/arxiv.org\/abs\/1811.12629."},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.3389\/fams.2018.00062"},{"key":"e_1_2_1_61_1","volume-title":"et\u00a0al","author":"Lei Lei","year":"2019","unstructured":"Lei Lei , Yue Tan , Shiwen Liu , Kan Zheng , et\u00a0al . 2019 . Deep reinforcement learning for autonomous internet of things: Model , applications and challenges. arXiv:1907.09059. http:\/\/arxiv.org\/abs\/1907.09059. Lei Lei, Yue Tan, Shiwen Liu, Kan Zheng, et\u00a0al. 2019. Deep reinforcement learning for autonomous internet of things: Model, applications and challenges. arXiv:1907.09059. 
http:\/\/arxiv.org\/abs\/1907.09059."},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.5555\/2685048.2685095"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/MNET.2015.7166194"},{"key":"e_1_2_1_64_1","volume-title":"Ameet Talwalkar, and Virginia Smith.","author":"Li Tian","year":"2019","unstructured":"Tian Li , Anit Kumar Sahu , Ameet Talwalkar, and Virginia Smith. 2019 . Federated learning: Challenges , methods, and future directions. arXiv:1908.07873. http:\/\/arxiv.org\/abs\/1908.07873. Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2019. Federated learning: Challenges, methods, and future directions. arXiv:1908.07873. http:\/\/arxiv.org\/abs\/1908.07873."},{"key":"e_1_2_1_65_1","unstructured":"Yuanzhi Li Tengyu Ma and Hongyang Zhang. 2017. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. arXiv:1712.09203. https:\/\/arxiv.org\/abs\/1712.09203.  Yuanzhi Li Tengyu Ma and Hongyang Zhang. 2017. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. arXiv:1712.09203. https:\/\/arxiv.org\/abs\/1712.09203."},{"key":"e_1_2_1_66_1","volume-title":"Proc. of NeurIPS. 5330\u20135340","author":"Lian Xiangru","year":"2017","unstructured":"Xiangru Lian , Ce Zhang , Huan Zhang , Cho-Jui Hsieh , Wei Zhang , and Ji Liu . 2017 . Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent . In Proc. of NeurIPS. 5330\u20135340 . Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. 2017. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Proc. of NeurIPS. 5330\u20135340."},{"key":"e_1_2_1_67_1","volume-title":"Proc. of NeurIPS. 
2566\u20132576","author":"Ligett Katrina","unstructured":"Katrina Ligett , Seth Neel , Aaron Roth , Bo Waggoner , and Steven Z. Wu . 2017. Accuracy first: Selecting a differential privacy level for accuracy constrained ERM . In Proc. of NeurIPS. 2566\u20132576 . Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Steven Z. Wu. 2017. Accuracy first: Selecting a differential privacy level for accuracy constrained ERM. In Proc. of NeurIPS. 2566\u20132576."},{"key":"e_1_2_1_68_1","volume-title":"Dinh Thai Hoang, Yutao Jiao, Ying-Chang Liang, Qiang Yang, Dusit Niyato, and Chunyan Miao.","author":"Bryan Lim Wei Yang","year":"2019","unstructured":"Wei Yang Bryan Lim , Nguyen Cong Luong , Dinh Thai Hoang, Yutao Jiao, Ying-Chang Liang, Qiang Yang, Dusit Niyato, and Chunyan Miao. 2019 . Federated learning in mobile edge networks: A comprehensive survey. arXiv:1909.11875. http:\/\/arxiv.org\/abs\/1909.11875. Wei Yang Bryan Lim, Nguyen Cong Luong, Dinh Thai Hoang, Yutao Jiao, Ying-Chang Liang, Qiang Yang, Dusit Niyato, and Chunyan Miao. 2019. Federated learning in mobile edge networks: A comprehensive survey. arXiv:1909.11875. http:\/\/arxiv.org\/abs\/1909.11875."},{"key":"e_1_2_1_69_1","volume-title":"Dally","author":"Lin Yujun","year":"2017","unstructured":"Yujun Lin , Song Han , Huizi Mao , Yu Wang , and William J . Dally . 2017 . Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv:1712.01887. http:\/\/arxiv.org\/abs\/1712.01887. Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J. Dally. 2017. Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv:1712.01887. http:\/\/arxiv.org\/abs\/1712.01887."},{"key":"e_1_2_1_70_1","unstructured":"Fang Liu and Ness Shroff. 2019. Data poisoning attacks on stochastic bandits. arXiv:1905.06494. http:\/\/arxiv.org\/abs\/1905.06494.  Fang Liu and Ness Shroff. 2019. Data poisoning attacks on stochastic bandits. arXiv:1905.06494. 
http:\/\/arxiv.org\/abs\/1905.06494."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2017.03.015"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3297858.3304009"},{"key":"e_1_2_1_73_1","volume-title":"Proc. of ICDCS. IEEE, 954\u2013964","author":"Luping Wang","year":"2019","unstructured":"Wang Luping , Wang Wei , and Li Bo . 2019 . CMFL: Mitigating communication overhead for federated learning . In Proc. of ICDCS. IEEE, 954\u2013964 . Wang Luping, Wang Wei, and Li Bo. 2019. CMFL: Mitigating communication overhead for federated learning. In Proc. of ICDCS. IEEE, 954\u2013964."},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2017.2682318"},{"key":"e_1_2_1_75_1","unstructured":"Aleksander Madry Aleksandar Makelov Ludwig Schmidt Dimitris Tsipras and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arxiv:stat.ML\/1706.06083. http:\/\/arxiv.org\/abs\/1706.06083.  Aleksander Madry Aleksandar Makelov Ludwig Schmidt Dimitris Tsipras and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arxiv:stat.ML\/1706.06083. http:\/\/arxiv.org\/abs\/1706.06083."},{"key":"e_1_2_1_76_1","volume-title":"Proc. of ICML. 4274\u20134283","author":"Mahloujifar Saeed","year":"2019","unstructured":"Saeed Mahloujifar , Mohammad Mahmoody , and Ameer Mohammed . 2019 . Data poisoning attacks in multi-party learning . In Proc. of ICML. 4274\u20134283 . Saeed Mahloujifar, Mohammad Mahmoody, and Ameer Mohammed. 2019. Data poisoning attacks in multi-party learning. In Proc. of ICML. 4274\u20134283."},{"key":"e_1_2_1_77_1","volume-title":"Mahoney","author":"Martin Charles H.","year":"2018","unstructured":"Charles H. Martin and Michael W . Mahoney . 2018 . Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. CoRR abs\/1810.01075 (2018). http:\/\/arxiv.org\/abs\/1810.01075. Charles H. 
Martin and Michael W. Mahoney. 2018. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. CoRR abs\/1810.01075 (2018). http:\/\/arxiv.org\/abs\/1810.01075."},{"key":"e_1_2_1_78_1","volume-title":"et\u00a0al","author":"McMahan H. Brendan","year":"2016","unstructured":"H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et\u00a0al. 2016. Communication-efficient learning of deep networks from decentralized data. arXiv:1602.05629. https:\/\/arxiv.org\/abs\/1602.05629."},{"key":"e_1_2_1_79_1","volume-title":"Proc. of AISTATS.","author":"McMahan H. Brendan","year":"2017","unstructured":"H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et\u00a0al. 2017. Communication-efficient learning of deep networks from decentralized data. In Proc. of AISTATS."},{"key":"e_1_2_1_80_1","volume-title":"Learning differentially private language models without losing accuracy. CoRR abs\/1710.06963","author":"McMahan H. Brendan","year":"2017","unstructured":"H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2017. Learning differentially private language models without losing accuracy. CoRR abs\/1710.06963 (2017). 
arxiv:1710.06963. http:\/\/arxiv.org\/abs\/1710.06963."},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00029"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2018.2844341"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISGT.2010.5434752"},{"key":"e_1_2_1_84_1","volume-title":"Proc. of OSDI. 561\u2013577","author":"Moritz Philipp","year":"2018","unstructured":"Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, et\u00a0al. 2018. Ray: A distributed framework for emerging applications. In Proc. of OSDI. 561\u2013577."},{"key":"e_1_2_1_85_1","unstructured":"M. G. Sarwar Murshed, Christopher Murphy, Daqing Hou, Nazar Khan, Ganesh Ananthanarayanan, and Faraz Hussain. 2019. Machine learning at the network edge: A survey. CoRR abs\/1908.00080. http:\/\/arxiv.org\/abs\/1908.00080."},{"key":"e_1_2_1_86_1","doi-asserted-by":"publisher","DOI":"10.1145\/3299869.3319874"},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00065"},{"key":"e_1_2_1_88_1","volume-title":"Reed","author":"Niknam Solmaz","year":"2019","unstructured":"Solmaz Niknam, Harpreet S. Dhillon, and Jeffery H. Reed. 2019. 
Federated learning for wireless communications: Motivation, opportunities and challenges. arXiv:1908.06847. http:\/\/arxiv.org\/abs\/1908.06847."},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICC.2019.8761315"},{"key":"e_1_2_1_90_1","unstructured":"Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal J\u00f3zefowicz. 2016. Revisiting distributed synchronous SGD. CoRR abs\/1604.00981. http:\/\/arxiv.org\/abs\/1604.00981."},{"key":"e_1_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.12"},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1109\/MNET.2017.1700030"},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/3362031"},{"key":"e_1_2_1_94_1","volume-title":"Proc. of NeurIPS. 2663\u20132671","author":"Roux Nicolas L.","unstructured":"Nicolas L. Roux, Mark Schmidt, and Francis R. Bach. 2012. A stochastic gradient method with an exponential convergence rate for finite training sets. In Proc. of NeurIPS. 2663\u20132671."},{"key":"e_1_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1109\/LNET.2019.2947144"},{"key":"e_1_2_1_96_1","unstructured":"Felix Sattler, Simon Wiedemann, Klaus-Robert M\u00fcller, and Wojciech Samek. 2019. Robust and communication-efficient federated learning from non-iid data. arXiv:1903.02891. 
http:\/\/arxiv.org\/abs\/1903.02891."},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2016.2579198"},{"key":"e_1_2_1_98_1","volume-title":"2017 IEEE Symposium on Security and Privacy (SP\u201917)","author":"Smith A.","unstructured":"A. Smith, A. Thakurta, and J. Upadhyay. 2017. Is interaction necessary for distributed private learning? In 2017 IEEE Symposium on Security and Privacy (SP\u201917)."},{"key":"e_1_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134077"},{"key":"e_1_2_1_100_1","volume-title":"Proc","author":"Sprague Michael R.","unstructured":"Michael R. Sprague, Amir Jalalirad, Marco Scavuzzo, Catalin Capota, Moritz Neun, Lyman Do, and Michael Kopp. 2018. Asynchronous federated learning for geospatial applications. In Proc. of ECML PKDD. Springer, 21\u201328."},{"key":"e_1_2_1_101_1","volume-title":"Proc. of NeurIPS. 3517\u20133529","author":"Steinhardt Jacob","unstructured":"Jacob Steinhardt, Pang Wei W. Koh, and Percy S. Liang. 2017. Certified defenses for data poisoning attacks. In Proc. of NeurIPS. 3517\u20133529."},{"key":"e_1_2_1_102_1","volume-title":"Proc. of NeurIPS. 4447\u20134458","author":"Stich Sebastian U.","year":"2018","unstructured":"Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. 2018. 
Sparsified SGD with memory. In Proc. of NeurIPS. 4447\u20134458."},{"key":"e_1_2_1_103_1","volume-title":"Proc. of NeurIPS. 3365\u20133375","author":"Sun Jun","year":"2019","unstructured":"Jun Sun, Tianyi Chen, Georgios Giannakis, and Zaiyue Yang. 2019. Communication-efficient distributed learning via lazily aggregated quantized gradients. In Proc. of NeurIPS. 3365\u20133375."},{"key":"e_1_2_1_104_1","volume-title":"Proc. of ICML. JMLR. org, 3329\u20133337","author":"Suresh Ananda Theertha","year":"2017","unstructured":"Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, and H. Brendan McMahan. 2017. Distributed mean estimation with limited communication. In Proc. of ICML. JMLR.org, 3329\u20133337."},{"key":"e_1_2_1_105_1","volume-title":"Hinton","author":"Sutskever Ilya","year":"2013","unstructured":"Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. 2013. On the importance of initialization and momentum in deep learning. In Proc. of ICML, Vol. 28. JMLR.org, 1139\u20131147."},{"key":"e_1_2_1_106_1","volume-title":"Proc. of ICML","volume":"70","author":"Tandon Rashish","year":"2017","unstructured":"Rashish Tandon, Qi Lei, Alexandros G. Dimakis, and Nikos Karampatziakis. 
2017. Gradient coding: Avoiding stragglers in distributed learning. In Proc. of ICML, Vol. 70. PMLR, 3368\u20133376."},{"key":"e_1_2_1_107_1","volume-title":"Proc. of NeurIPS. 7652\u20137662","author":"Tang Hanlin","year":"2018","unstructured":"Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, and Ji Liu. 2018. Communication compression for decentralized training. In Proc. of NeurIPS. 7652\u20137662."},{"key":"e_1_2_1_108_1","volume-title":"Doublesqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression. arXiv:1905.05957","author":"Tang Hanlin","year":"2019","unstructured":"Hanlin Tang, Xiangru Lian, Tong Zhang, and Ji Liu. 2019. Doublesqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression. arXiv:1905.05957. http:\/\/arxiv.org\/abs\/1905.05957."},{"key":"e_1_2_1_109_1","volume-title":"Attacks meet interpretability: Attribute-steered detection of adversarial samples. In Proc. of NeurIPS.","author":"Tao Guanhong","year":"2018","unstructured":"Guanhong Tao, Shiqing Ma, Yingqi Liu, and Xiangyu Zhang. 2018. Attacks meet interpretability: Attribute-steered detection of adversarial samples. In Proc. of NeurIPS."},{"key":"e_1_2_1_110_1","volume-title":"Coursera: Neural networks for machine learning. Technical Report.","author":"Tieleman T.","year":"2017","unstructured":"T. Tieleman and G. Hinton. 2017. 
Divide the gradient by a running average of its recent magnitude. Coursera: Neural networks for machine learning. Technical Report."},{"key":"e_1_2_1_111_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2019.8737464"},{"key":"e_1_2_1_112_1","volume-title":"No Peek: A survey of private distributed deep learning. arxiv:1812.03288.http:\/\/arxiv.org\/abs\/1812.03288.","author":"Vepakomma Praneeth","year":"2018","unstructured":"Praneeth Vepakomma, Tristan Swedish, Ramesh Raskar, Otkrist Gupta, and Abhimanyu Dubey. 2018. No Peek: A survey of private distributed deep learning. arxiv:1812.03288. http:\/\/arxiv.org\/abs\/1812.03288."},{"key":"e_1_2_1_113_1","volume-title":"12th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916)","author":"Viswanathan Raajay","year":"2016","unstructured":"Raajay Viswanathan, Ganesh Ananthanarayanan, and Aditya Akella. 2016. CLARINET: WAN-Aware optimization for analytics queries. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI\u201916). 435\u2013450."},{"key":"e_1_2_1_114_1","volume-title":"Proc. of NeurIPS. 14236\u201314245","author":"Vogels Thijs","year":"2019","unstructured":"
Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. 2019. PowerSGD: Practical low-rank gradient compression for distributed optimization. In Proc. of NeurIPS. 14236\u201314245."},{"key":"e_1_2_1_115_1","volume-title":"Philip Brighten Godfrey, Thomas Jungblut, Jitu Padhye, and George Varghese.","author":"Vulimiri Ashish","year":"2015","unstructured":"Ashish Vulimiri, Carlo Curino, Philip Brighten Godfrey, Thomas Jungblut, Jitu Padhye, and George Varghese. 2015. Global analytics in the face of bandwidth and regulatory constraints. In Proc. of NSDI. USENIX Association, 323\u2013336."},{"key":"e_1_2_1_116_1","unstructured":"Benjamin W. Wah (Ed.). 2008. Wiley Encyclopedia of Computer Science and Engineering. John Wiley & Sons Inc."},{"key":"e_1_2_1_117_1","volume-title":"Proc. of ICML","volume":"97","author":"Wang Di","year":"2019","unstructured":"Di Wang, Changyou Chen, and Jinhui Xu. 2019. Differentially private empirical risk minimization with non-convex loss functions. In Proc. of ICML, Vol. 97. PMLR, 6526\u20136535."},{"key":"e_1_2_1_118_1","volume-title":"Proc. of NeurIPS. 965\u2013974","author":"Wang Di","year":"2018","unstructured":"Di Wang, Marco Gaboardi, and Jinhui Xu. 2018. Empirical risk minimization in non-interactive local differential privacy revisited. 
In Proc. of NeurIPS. 965\u2013974."},{"key":"e_1_2_1_119_1","volume-title":"Proc. of NeurIPS. 2722\u20132731","author":"Wang Di","year":"2017","unstructured":"Di Wang, Minwei Ye, and Jinhui Xu. 2017. Differentially private empirical risk minimization revisited: Faster and more general. In Proc. of NeurIPS. 2722\u20132731."},{"key":"e_1_2_1_120_1","unstructured":"Haozhao Wang, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, and Baoliu Ye. 2020. Intermittent pulling with local compensation for communication-efficient federated learning. CoRR abs\/2001.08277. https:\/\/arxiv.org\/abs\/2001.08277."},{"key":"e_1_2_1_121_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2017.06.003"},{"key":"e_1_2_1_122_1","volume-title":"Proc. of NeurIPS. 4238\u20134248","author":"Wang Songtao","year":"2018","unstructured":"Songtao Wang, Dan Li, Yang Cheng, Jinkun Geng, Yanshu Wang, Shuai Wang, Shu-Tao Xia, and Jianping Wu. 2018. BML: A high-performance, low-cost gradient synchronization algorithm for DML training. In Proc. of NeurIPS. 4238\u20134248."},{"key":"e_1_2_1_123_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2018.8486403"},{"key":"e_1_2_1_124_1","volume-title":"Proc. of NeurIPS. 1299\u20131309","author":"Wangni Jianqiao","year":"2018","unstructured":"Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. 2018. 
Gradient sparsification for communication-efficient distributed optimization. In Proc. of NeurIPS. 1299\u20131309."},{"key":"e_1_2_1_125_1","volume-title":"Proc. of NeurIPS. 1509\u20131519","author":"Wen Wei","year":"2017","unstructured":"Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. 2017. Terngrad: Ternary gradients to reduce communication in distributed deep learning. In Proc. of NeurIPS. 1509\u20131519."},{"key":"e_1_2_1_126_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2019.00048"},{"key":"e_1_2_1_127_1","unstructured":"Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang. 2018. Error compensated quantized SGD and its applications to large-scale distributed optimization. arXiv:1806.08054. http:\/\/arxiv.org\/abs\/1806.08054."},{"key":"e_1_2_1_128_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2017.2737968"},{"key":"e_1_2_1_129_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2017.2687918"},{"key":"e_1_2_1_130_1","volume-title":"Zeno: Byzantine-suspicious stochastic gradient descent. CoRR abs\/1805.10032","author":"Xie Cong","year":"2018","unstructured":"Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. 2018. 
Zeno: Byzantine-suspicious stochastic gradient descent. CoRR abs\/1805.10032 (2018). arxiv:1805.10032. http:\/\/arxiv.org\/abs\/1805.10032."},{"key":"e_1_2_1_131_1","doi-asserted-by":"publisher","DOI":"10.1016\/J.ENG.2016.02.008"},{"key":"e_1_2_1_132_1","volume-title":"Zhijie Deng, Qirong Ho, Guangwen Yang, and Eric P. Xing.","author":"Xu Shizhen","year":"2018","unstructured":"Shizhen Xu, Hao Zhang, Graham Neubig, Wei Dai, Jin Kyu Kim, Zhijie Deng, Qirong Ho, Guangwen Yang, and Eric P. Xing. 2018. Cavs: An efficient runtime system for dynamic neural networks. In Proc. of USENIX ATC. 937\u2013950."},{"key":"e_1_2_1_133_1","doi-asserted-by":"publisher","DOI":"10.1145\/2348543.2348567"},{"key":"e_1_2_1_134_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNET.2015.2421897"},{"key":"e_1_2_1_135_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCOMM.2019.2944169"},{"key":"e_1_2_1_136_1","doi-asserted-by":"publisher","DOI":"10.1145\/3339474"},{"key":"e_1_2_1_137_1","unstructured":"Dong Yin, Yudong Chen, Kannan Ramchandran, and Peter Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. arXiv:1803.01498. http:\/\/arxiv.org\/abs\/1803.01498."},{"key":"e_1_2_1_138_1","volume-title":"Bartlett","author":"Yin Dong","year":"2018","unstructured":"
Dong Yin, Yudong Chen, Kannan Ramchandran, and Peter L. Bartlett. 2018. Defending against saddle point attack in byzantine-robust distributed learning. CoRR abs\/1806.05358 (2018). arxiv:1806.05358. http:\/\/arxiv.org\/abs\/1806.05358."},{"key":"e_1_2_1_139_1","unstructured":"Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, and Yoshitaka Ushiku. 2019. Decentralized learning of generative adversarial networks from multi-client non-iid data. arXiv:1905.09684. http:\/\/arxiv.org\/abs\/1905.09684."},{"key":"e_1_2_1_140_1","unstructured":"Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, and Ryo Yonetani. 2019. Hybrid-FL: Cooperative learning mechanism using Non-IID data in wireless networks. arXiv:1905.07210. http:\/\/arxiv.org\/abs\/1905.07210."},{"key":"e_1_2_1_141_1","volume-title":"Proc. of NeurIPS. 4440\u20134451","author":"Yu Yue","year":"2019","unstructured":"Yue Yu, Jiaxiang Wu, and Longbo Huang. 2019. Double quantization for communication-efficient distributed optimization. In Proc. of NeurIPS. 
4440\u20134451."},{"key":"e_1_2_1_142_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2013.18"},{"key":"e_1_2_1_143_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2020.2969148"},{"key":"e_1_2_1_144_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.2967772"},{"key":"e_1_2_1_145_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2019.2927314"},{"key":"e_1_2_1_146_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2017.12.013"},{"key":"e_1_2_1_147_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM41043.2020.9155268"},{"key":"e_1_2_1_148_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2019.2904897"},{"key":"e_1_2_1_149_1","volume-title":"Proc. of ICML. JMLR. org, 4035\u20134043","author":"Zhang Hantian","year":"2017","unstructured":"Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. 2017. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In Proc. of ICML. JMLR.org, 4035\u20134043."},{"key":"e_1_2_1_150_1","volume-title":"Xing","author":"Zhang Hao","year":"2017","unstructured":"Hao Zhang, Zeyu Zheng, Shizhen Xu, Wei Dai, Qirong Ho, Xiaodan Liang, Zhiting Hu, Jinliang Wei, Pengtao Xie, and Eric P. Xing. 2017. Poseidon: An efficient communication architecture for distributed deep learning on clusters. In Proc. of USENIX ATC. 181\u2013193."},{"key":"e_1_2_1_151_1","volume-title":"Proc. of USENIX ATC. 
951\u2013965","author":"Zhang Minjia","year":"2018","unstructured":"Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, and Yuxiong He. 2018. DeepCPU: Serving RNN-based deep learning models 10x faster. In Proc. of USENIX ATC. 951\u2013965."},{"key":"e_1_2_1_152_1","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2015.2470255"},{"key":"e_1_2_1_153_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2015.2435356"},{"key":"e_1_2_1_154_1","volume-title":"Proc. of AAAI (AAAI Workshops)","volume":"18","author":"Zhao Jun","year":"2018","unstructured":"Jun Zhao. 2018. Distributed deep learning under differential privacy with the teacher-student paradigm. In Proc. of AAAI (AAAI Workshops), Vol. WS-18. 404\u2013408."},{"key":"e_1_2_1_155_1","unstructured":"Tian Zhao, Yaqi Zhang, and Kunle Olukotun. 2019. Serving recurrent neural networks efficiently with a spatial accelerator. arxiv:1909.13654. http:\/\/arxiv.org\/abs\/1909.13654."},{"key":"e_1_2_1_156_1","unstructured":"Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated learning with Non-IID data. arXiv:1806.00582. http:\/\/arxiv.org\/abs\/1806.00582."},{"key":"e_1_2_1_157_1","volume-title":"Proc. of ICML","volume":"70","author":"Zheng Kai","year":"2017","unstructured":"Kai Zheng, Wenlong Mou, and Liwei Wang. 
2017. Collect at once, use effectively: Making non-interactive locally private learning possible. In Proc. of ICML, Vol. 70. PMLR, 4130\u20134139."},{"key":"e_1_2_1_158_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2918951"},{"key":"e_1_2_1_159_1","volume-title":"Proc. of NeurIPS. 2595\u20132603","author":"Zinkevich Martin","unstructured":"Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. 2010. Parallelized stochastic gradient descent. In Proc. of NeurIPS. 2595\u20132603."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3464419","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3464419","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:17:10Z","timestamp":1750191430000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3464419"}},"subtitle":["The Enabling Technology for Distributed Big Data Analytics in the 
Edge"],"short-title":[],"issued":{"date-parts":[[2021,7,18]]},"references-count":159,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2022,9,30]]}},"alternative-id":["10.1145\/3464419"],"URL":"https:\/\/doi.org\/10.1145\/3464419","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,7,18]]},"assertion":[{"value":"2020-03-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-05-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-07-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}