{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T02:39:30Z","timestamp":1747190370531,"version":"3.40.5"},"reference-count":40,"publisher":"Wiley","license":[{"start":{"date-parts":[[2020,11,16]],"date-time":"2020-11-16T00:00:00Z","timestamp":1605484800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Complexity"],"published-print":{"date-parts":[[2020,11,16]]},"abstract":"<jats:p>Most current online distributed machine learning algorithms have been studied in a data-parallel architecture among agents in networks. We study online distributed machine learning from a different perspective, where the features about the same samples are observed by multiple agents that wish to collaborate but do not exchange the raw data with each other. We propose a distributed feature online gradient descent algorithm and prove that local solution converges to the global minimizer with a sublinear rate <jats:inline-formula>\n                     <a:math xmlns:a=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" id=\"M1\">\n                        <a:mi mathvariant=\"normal\">O<\/a:mi>\n                        <a:mfenced open=\"(\" close=\")\" separators=\"|\">\n                           <a:mrow>\n                              <a:msqrt>\n                                 <a:mrow>\n                                    <a:mn>2<\/a:mn>\n                                    <a:mi>T<\/a:mi>\n                                 <\/a:mrow>\n                              <\/a:msqrt>\n                           <\/a:mrow>\n                        <\/a:mfenced>\n                     <\/a:math>\n                  <\/jats:inline-formula>. Our algorithm does not require exchange of the primal data or even the model parameters between agents. 
First, we design an auxiliary variable that captures the information of the global features and is estimated at each agent by a dynamic consensus method. Then, the local parameters are updated by an online gradient descent method based on the local data stream. Simulations illustrate the performance of the proposed algorithm.<\/jats:p>","DOI":"10.1155\/2020\/8830359","type":"journal-article","created":{"date-parts":[[2020,11,17]],"date-time":"2020-11-17T00:50:08Z","timestamp":1605574208000},"page":"1-10","source":"Crossref","is-referenced-by-count":0,"title":["Online Supervised Learning with Distributed Features over Multiagent System"],"prefix":"10.1155","volume":"2020","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6212-4429","authenticated-orcid":true,"given":"Xibin","family":"An","sequence":"first","affiliation":[{"name":"High-Tech Institute of Xi\u2019an, Xi\u2019an 710025, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2451-1142","authenticated-orcid":true,"given":"Bing","family":"He","sequence":"additional","affiliation":[{"name":"High-Tech Institute of Xi\u2019an, Xi\u2019an 710025, China"}]},{"given":"Chen","family":"Hu","sequence":"additional","affiliation":[{"name":"High-Tech Institute of Xi\u2019an, Xi\u2019an 710025, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3728-1830","authenticated-orcid":true,"given":"Bingqi","family":"Liu","sequence":"additional","affiliation":[{"name":"High-Tech Institute of Xi\u2019an, Xi\u2019an 710025, China"}]}],"member":"311","reference":[{"first-page":"2232","article-title":"FDML: a collaborative machine learning framework for distributed features","author":"Y. Hu","key":"1"},{"author":"A. T. 
Vu","key":"2","article-title":"Distributed adaptive model rules for mining big data streams"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1155\/2017\/4978613"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1109\/tsp.2014.2367458"},{"key":"5","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2019.2963172"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1109\/TCSI.2020.2975383"},{"key":"7","doi-asserted-by":"publisher","DOI":"10.1007\/s11424-018-7265-y"},{"key":"8","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2020.2972403"},{"key":"9","doi-asserted-by":"publisher","DOI":"10.1155\/2020\/7685460"},{"key":"10","doi-asserted-by":"publisher","DOI":"10.1002\/rnc.4941"},{"first-page":"2512","article-title":"Diffusion gradient boosting for networked learning","author":"B. Ying","key":"11"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.1109\/tsp.2018.2881661"},{"key":"13","doi-asserted-by":"publisher","DOI":"10.1007\/s11424-017-6273-7"},{"key":"14","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2017.2686377"},{"key":"15","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2006.1657817"},{"article-title":"On distributed online convex optimization with sublinear dynamic regret and fit","year":"2020","author":"P. Sharma","key":"16"},{"key":"17","unstructured":"MurdopoA.Distributed decision tree learning for mining big data streams2013Dresden, GermanyEuropean Master in Distributed ComputingMaster of Science Thesis"},{"key":"18","doi-asserted-by":"publisher","DOI":"10.1561\/2200000018"},{"issue":"11","key":"19","doi-asserted-by":"crossref","first-page":"3045","DOI":"10.1109\/TCYB.2017.2755720","article-title":"An adaptive primal-dual subgradient algorithm for online distributed constrained optimization","volume":"48","author":"D. 
Yuan","year":"2017","journal-title":"IEEE Transactions on Cybernetics"},{"key":"20","doi-asserted-by":"publisher","DOI":"10.1109\/tsp.2014.2385045"},{"key":"21","doi-asserted-by":"publisher","DOI":"10.1016\/j.automatica.2017.07.003"},{"issue":"1","key":"22","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1561\/2200000016","article-title":"Distributed optimization and statistical learning via the alternating direction method of multipliers","volume":"3","author":"S. Boyd","year":"2011","journal-title":"Foundations and Trends in Machine Learning"},{"article-title":"Learning privately over distributed features: an ADMM sharing approach","year":"2019","author":"Y. Hu","key":"23"},{"issue":"32","key":"24","first-page":"298","article-title":"Algebraic graph theory","volume":"207","author":"G. C. Rota","year":"1994","journal-title":"Graduate Texts in Mathematics"},{"first-page":"4177","article-title":"Distributed subgradient methods and quantization effects","author":"A. Nedic","key":"25"},{"key":"26","doi-asserted-by":"publisher","DOI":"10.1109\/tsp.2014.2304432"},{"key":"27","doi-asserted-by":"publisher","DOI":"10.1109\/tac.2014.2308612"},{"key":"28","doi-asserted-by":"publisher","DOI":"10.1137\/14096668x"},{"article-title":"Exact diffusion for distributed optimization and learning\u2013part I: algorithm development","year":"2017","author":"K. Yuan","key":"29"},{"key":"30","doi-asserted-by":"publisher","DOI":"10.1016\/j.automatica.2009.10.021"},{"key":"31","doi-asserted-by":"publisher","DOI":"10.1561\/2200000051"},{"key":"32","doi-asserted-by":"publisher","DOI":"10.1109\/tac.2008.2009515"},{"key":"33","doi-asserted-by":"publisher","DOI":"10.1109\/jstsp.2011.2127446"},{"first-page":"338","article-title":"Stability and convergence properties of dynamic average consensus estimators","author":"R. A. 
Freeman","key":"34"},{"first-page":"449","article-title":"Tracking slowly moving clairvoyant: optimal dynamic regret of online learning with true and noisy gradient","author":"T. Yang","key":"35"},{"issue":"1","key":"36","first-page":"7942","article-title":"SGDLibrary: a MATLAB library for stochastic optimization algorithms","volume":"18","author":"H. Kasai","year":"2017","journal-title":"The Journal of Machine Learning Research"},{"article-title":"Dadam: a consensus-based distributed adaptive gradient method for online optimization","year":"2019","author":"P. Nazari","key":"37"},{"key":"38","first-page":"6","article-title":"Online algorithms and stochastic approximations","volume":"5","author":"D. Saad","year":"1998","journal-title":"Online Learning"},{"first-page":"2061","article-title":"Stochastic gradient boosted distributed decision trees","author":"J. Ye","key":"39"},{"key":"40","doi-asserted-by":"publisher","DOI":"10.23919\/TMA.2019.8784565"}],"container-title":["Complexity"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/8830359.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/8830359.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/8830359.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,11,17]],"date-time":"2020-11-17T00:50:44Z","timestamp":1605574244000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/complexity\/2020\/8830359\/"}},"subtitle":[],"editor":[{"given":"Ning","family":"Cai","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2020,11,16]]},"references-count":40,"alternative-id":["8830359","8830359
"],"URL":"https:\/\/doi.org\/10.1155\/2020\/8830359","relation":{},"ISSN":["1099-0526","1076-2787"],"issn-type":[{"type":"electronic","value":"1099-0526"},{"type":"print","value":"1076-2787"}],"subject":[],"published":{"date-parts":[[2020,11,16]]}}}