{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T11:16:10Z","timestamp":1770462970150,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":63,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,6,9]],"date-time":"2022-06-09T00:00:00Z","timestamp":1654732800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,6,9]]},"DOI":"10.1145\/3519935.3519971","type":"proceedings-article","created":{"date-parts":[[2022,6,10]],"date-time":"2022-06-10T15:29:32Z","timestamp":1654874972000},"page":"529-542","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Improved iteration complexities for overconstrained\n            <i>p<\/i>\n            -norm regression"],"prefix":"10.1145","author":[{"given":"Arun","family":"Jambulapati","sequence":"first","affiliation":[{"name":"Stanford University, USA"}]},{"given":"Yang P.","family":"Liu","sequence":"additional","affiliation":[{"name":"Stanford University, USA"}]},{"given":"Aaron","family":"Sidford","sequence":"additional","affiliation":[{"name":"Stanford University, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,6,10]]},"reference":[{"key":"e_1_3_2_1_1_1","first-page":"1","article-title":"Almost-Linear-Time Weighted \u2113 _p-Norm Solvers in Slightly Dense Graphs via Sparsification. In ICALP (LIPIcs, Vol. 198)","volume":"9","author":"Adil Deeksha","year":"2021","unstructured":"Deeksha Adil , Brian Bullins , Rasmus Kyng , and Sushant Sachdeva . 2021 . Almost-Linear-Time Weighted \u2113 _p-Norm Solvers in Slightly Dense Graphs via Sparsification. In ICALP (LIPIcs, Vol. 198) . 
Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik , 9 : 1 \u2013 9 :15. Deeksha Adil, Brian Bullins, Rasmus Kyng, and Sushant Sachdeva. 2021. Almost-Linear-Time Weighted \u2113 _p-Norm Solvers in Slightly Dense Graphs via Sparsification. In ICALP (LIPIcs, Vol. 198). Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 9:1\u20139:15.","journal-title":"Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik"},{"key":"e_1_3_2_1_2_1","unstructured":"Deeksha Adil Brian Bullins and Sushant Sachdeva. 2021. Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization. arXiv preprint arXiv:2107.02432.  Deeksha Adil Brian Bullins and Sushant Sachdeva. 2021. Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization. arXiv preprint arXiv:2107.02432."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611975482.86"},{"key":"e_1_3_2_1_4_1","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019","author":"Adil Deeksha","year":"2019","unstructured":"Deeksha Adil , Richard Peng , and Sushant Sachdeva . 2019 . Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression . In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 , NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). 14166\u201314177. http:\/\/papers.nips.cc\/paper\/9565-fast-provably-convergent-irls-algorithm-for-p-norm-linear-regression Deeksha Adil, Richard Peng, and Sushant Sachdeva. 2019. Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, Hanna M. 
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). 14166\u201314177. http:\/\/papers.nips.cc\/paper\/9565-fast-provably-convergent-irls-algorithm-for-p-norm-linear-regression"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611975994.54"},{"key":"e_1_3_2_1_6_1","volume-title":"Conference On Learning Theory. 774\u2013792","author":"Agarwal Naman","year":"2018","unstructured":"Naman Agarwal and Elad Hazan . 2018 . Lower bounds for higher-order convex optimization . In Conference On Learning Theory. 774\u2013792 . Naman Agarwal and Elad Hazan. 2018. Lower bounds for higher-order convex optimization. In Conference On Learning Theory. 774\u2013792."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10107-018-1293-1"},{"key":"e_1_3_2_1_8_1","volume-title":"Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs. In 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020","author":"Axiotis Kyriakos","year":"2020","unstructured":"Kyriakos Axiotis , Aleksander M\u0105dry , and Adrian Vladu . 2020 . Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs. In 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020 , Durham, NC, USA , November 16-19, 2020. 93\u2013104. Kyriakos Axiotis, Aleksander M\u0105dry, and Adrian Vladu. 2020. Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs. In 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Durham, NC, USA, November 16-19, 2020. 
93\u2013104."},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1214\/09-EJS521"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611975994.16"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3406325.3451108"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS46700.2020.00090"},{"key":"e_1_3_2_1_13_1","volume-title":"Aaron Sidford, and Zhao Song.","author":"van den Brand Jan","year":"2020","unstructured":"Jan van den Brand , Yin Tat Lee , Aaron Sidford, and Zhao Song. 2020 . Solving Tall Dense Linear Programs in Nearly Linear Time. In STOC. arxiv:2002.02304 Jan van den Brand, Yin Tat Lee, Aaron Sidford, and Zhao Song. 2020. Solving Tall Dense Linear Programs in Nearly Linear Time. In STOC. arxiv:2002.02304"},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3188745.3188776"},{"key":"e_1_3_2_1_15_1","volume-title":"Complexity of Highly Parallel Non-Smooth Convex Optimization. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019","author":"Bubeck S\u00e9bastien","year":"2019","unstructured":"S\u00e9bastien Bubeck , Qijia Jiang , Yin Tat Lee , Yuanzhi Li , and Aaron Sidford . 2019 . Complexity of Highly Parallel Non-Smooth Convex Optimization. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 , NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). 13900\u201313909. S\u00e9bastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. 2019. Complexity of Highly Parallel Non-Smooth Convex Optimization. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, Hanna M. 
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). 13900\u201313909."},{"key":"e_1_3_2_1_16_1","unstructured":"Brian Bullins. 2018. Fast minimization of structured convex quartics. arXiv preprint arXiv:1812.10349.  Brian Bullins. 2018. Fast minimization of structured convex quartics. arXiv preprint arXiv:1812.10349."},{"key":"e_1_3_2_1_17_1","unstructured":"Brian Bullins and Richard Peng. 2019. Higher-order accelerated methods for faster non-smooth optimization. arXiv preprint arXiv:1906.01621.  Brian Bullins and Richard Peng. 2019. Higher-order accelerated methods for faster non-smooth optimization. arXiv preprint arXiv:1906.01621."},{"key":"e_1_3_2_1_18_1","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020","author":"Carmon Yair","year":"2020","unstructured":"Yair Carmon , Arun Jambulapati , Qijia Jiang , Yujia Jin , Yin Tat Lee , Aaron Sidford , and Kevin Tian . 2020 . Acceleration with a Ball Optimization Oracle . In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020 , NeurIPS 2020, December 6-12, 2020, virtual. Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, Aaron Sidford, and Kevin Tian. 2020. Acceleration with a Ball Optimization Oracle. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual."},{"key":"e_1_3_2_1_19_1","unstructured":"Yair Carmon Arun Jambulapati Yujia Jin and Aaron Sidford. 2021. Thinking inside the ball: Near-optimal minimization of the maximal loss. arXiv preprint arXiv:2105.01778.  Yair Carmon Arun Jambulapati Yujia Jin and Aaron Sidford. 2021. Thinking inside the ball: Near-optimal minimization of the maximal loss. 
arXiv preprint arXiv:2105.01778."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/2422436.2422469"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/1993636.1993674"},{"key":"e_1_3_2_1_22_1","volume-title":"International Conference on Machine Learning. 1262\u20131271","author":"Clarkson Kenneth","year":"2019","unstructured":"Kenneth Clarkson , Ruosong Wang , and David Woodruff . 2019 . Dimensionality reduction for tukey regression . In International Conference on Machine Learning. 1262\u20131271 . Kenneth Clarkson, Ruosong Wang, and David Woodruff. 2019. Dimensionality reduction for tukey regression. In International Conference on Machine Learning. 1262\u20131271."},{"key":"e_1_3_2_1_23_1","unstructured":"Kenneth L Clarkson. 2005. Subgradient and sampling algorithms for l 1 regression.  Kenneth L Clarkson. 2005. Subgradient and sampling algorithms for l 1 regression."},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1137\/140963698"},{"key":"e_1_3_2_1_25_1","volume-title":"Woodruff","author":"Clarkson Kenneth L.","year":"2013","unstructured":"Kenneth L. Clarkson and David P . Woodruff . 2013 . Low rank approximation and regression in input sparsity time. In STOC. ACM , 81\u201390. Kenneth L. Clarkson and David P. Woodruff. 2013. Low rank approximation and regression in input sparsity time. In STOC. ACM, 81\u201390."},{"key":"e_1_3_2_1_26_1","volume-title":"Conference on Learning Theory, COLT 2019","author":"Cohen Michael B.","year":"2019","unstructured":"Michael B. Cohen , Ben Cousins , Yin Tat Lee , and Xin Yang . 2019 . A near-optimal algorithm for approximating the John Ellipsoid . In Conference on Learning Theory, COLT 2019 , 25-28 June 2019, Phoenix, AZ, USA. 849\u2013873. Michael B. Cohen, Ben Cousins, Yin Tat Lee, and Xin Yang. 2019. A near-optimal algorithm for approximating the John Ellipsoid. In Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA. 
849\u2013873."},{"key":"e_1_3_2_1_27_1","volume-title":"Yin Tat Lee, and Zhao Song","author":"Cohen Michael B","year":"2019","unstructured":"Michael B Cohen , Yin Tat Lee, and Zhao Song . 2019 . Solving Linear Programs in the Current Matrix Multiplication Time. In STOC. arxiv:1810.07896 Michael B Cohen, Yin Tat Lee, and Zhao Song. 2019. Solving Linear Programs in the Current Matrix Multiplication Time. In STOC. arxiv:1810.07896"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.5555\/3039686.3039734"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2746539.2746567"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1137\/070696507"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.5555\/2503308.2503352"},{"key":"e_1_3_2_1_32_1","volume-title":"Conference On Learning Theory, COLT 2018","author":"Durfee David","year":"2018","unstructured":"David Durfee , Kevin A. Lai , and Saurabh Sawlani . 2018 . \u2113 _1 Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent . In Conference On Learning Theory, COLT 2018 , Stockholm, Sweden , 6-9 July 2018, S\u00e9bastien Bubeck, Vianney Perchet, and Philippe Rigollet (Eds.) (Proceedings of Machine Learning Research, Vol. 75). PMLR, 1626\u20131656. http:\/\/proceedings.mlr.press\/v75\/durfee18a.html David Durfee, Kevin A. Lai, and Saurabh Sawlani. 2018. \u2113 _1 Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent. In Conference On Learning Theory, COLT 2018, Stockholm, Sweden, 6-9 July 2018, S\u00e9bastien Bubeck, Vianney Perchet, and Philippe Rigollet (Eds.) (Proceedings of Machine Learning Research, Vol. 75). PMLR, 1626\u20131656. http:\/\/proceedings.mlr.press\/v75\/durfee18a.html"},{"key":"e_1_3_2_1_33_1","volume-title":"International Conference on Machine Learning. 1794\u20131801","author":"Ene Alina","year":"2019","unstructured":"Alina Ene and Adrian Vladu . 2019 . 
Improved Convergence for \u2113 _1 and \u2113 _\u221e Regression via Iteratively Reweighted Least Squares . In International Conference on Machine Learning. 1794\u20131801 . Alina Ene and Adrian Vladu. 2019. Improved Convergence for \u2113 _1 and \u2113 _\u221e Regression via Iteratively Reweighted Least Squares. In International Conference on Machine Learning. 1794\u20131801."},{"key":"e_1_3_2_1_34_1","volume-title":"Conference on Learning Theory, COLT 2019","volume":"1393","author":"Gasnikov Alexander V.","year":"2019","unstructured":"Alexander V. Gasnikov , Pavel E. Dvurechensky , Eduard A. Gorbunov , Evgeniya A. Vorontsova , Daniil Selikhanovych , C\u00e9sar A. Uribe , Bo Jiang , Haoyue Wang , Shuzhong Zhang , S\u00e9bastien Bubeck , Qijia Jiang , Yin Tat Lee , Yuanzhi Li , and Aaron Sidford . 2019 . Near Optimal Methods for Minimizing Convex Functions with Lipschitz $ p$ -th Derivatives . In Conference on Learning Theory, COLT 2019 , 25-28 June 2019, Phoenix, AZ, USA, Alina Beygelzimer and Daniel Hsu (Eds.) (Proceedings of Machine Learning Research , Vol. 99). PMLR, 1392\u2013 1393 . Alexander V. Gasnikov, Pavel E. Dvurechensky, Eduard A. Gorbunov, Evgeniya A. Vorontsova, Daniil Selikhanovych, C\u00e9sar A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, S\u00e9bastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. 2019. Near Optimal Methods for Minimizing Convex Functions with Lipschitz $ p$ -th Derivatives. In Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, Alina Beygelzimer and Daniel Hsu (Eds.) (Proceedings of Machine Learning Research, Vol. 99). PMLR, 1392\u20131393."},{"key":"e_1_3_2_1_35_1","unstructured":"Mehrdad Ghadiri Richard Peng and Santosh S Vempala. 2021. Sparse Regression Faster than d^\u03c9. arXiv preprint arXiv:2109.11537.  Mehrdad Ghadiri Richard Peng and Santosh S Vempala. 2021. Sparse Regression Faster than d^\u03c9. 
arXiv preprint arXiv:2109.11537."},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3406325.3451058"},{"key":"e_1_3_2_1_37_1","unstructured":"Tarun Kathuria. 2020. A Potential Reduction Inspired Algorithm for Exact Max Flow in Almost \"0365O(m^4\/3) Time. arXiv preprint arXiv:2009.03260.  Tarun Kathuria. 2020. A Potential Reduction Inspired Algorithm for Exact Max Flow in Almost \"0365O(m^4\/3) Time. arXiv preprint arXiv:2009.03260."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS46700.2020.00020"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/2213977.2213979"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313276.3316410"},{"key":"e_1_3_2_1_41_1","volume-title":"Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015","author":"Lee Yin Tat","year":"2015","unstructured":"Yin Tat Lee and Aaron Sidford . 2015 . Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015 , Berkeley, CA, USA , 17-20 October, 2015. 230\u2013249. Yin Tat Lee and Aaron Sidford. 2015. Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015. 230\u2013249."},{"key":"e_1_3_2_1_42_1","unstructured":"Yin Tat Lee and Aaron Sidford. 2019. Solving linear programs with Sqrt (rank) linear system solves. arXiv preprint arXiv:1910.08033.  Yin Tat Lee and Aaron Sidford. 2019. Solving linear programs with Sqrt (rank) linear system solves. arXiv preprint arXiv:1910.08033."},{"key":"e_1_3_2_1_43_1","unstructured":"Yin Tat Lee Zhao Song and Qiuyi Zhang. 2019. Solving Empirical Risk Minimization in the Current Matrix Multiplication Time. In COLT. 
arxiv:1905.04447  Yin Tat Lee Zhao Song and Qiuyi Zhang. 2019. Solving Empirical Risk Minimization in the Current Matrix Multiplication Time. In COLT. arxiv:1905.04447"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.4064\/sm-63-2-207-212"},{"key":"e_1_3_2_1_45_1","article-title":"Trust region Newton method for large-scale logistic regression","volume":"9","author":"Lin Chih-Jen","year":"2008","unstructured":"Chih-Jen Lin , Ruby C Weng , and S Sathiya Keerthi . 2008 . Trust region Newton method for large-scale logistic regression .. Journal of Machine Learning Research , 9 , 4 (2008). Chih-Jen Lin, Ruby C Weng, and S Sathiya Keerthi. 2008. Trust region Newton method for large-scale logistic regression.. Journal of Machine Learning Research, 9, 4 (2008).","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Yang P Liu and Aaron Sidford. 2020. Faster divergence maximization for faster maximum flow. arXiv preprint arXiv:2003.08929.  Yang P Liu and Aaron Sidford. 2020. Faster divergence maximization for faster maximum flow. arXiv preprint arXiv:2003.08929.","DOI":"10.1145\/3357713.3384247"},{"key":"e_1_3_2_1_47_1","volume-title":"Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020","author":"Yang","year":"2020","unstructured":"Yang P. Liu and Aaron Sidford. 2020. Faster energy maximization for faster maximum flow . In Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020 , Chicago, IL, USA , June 22-26, 2020 . 803\u2013814. Yang P. Liu and Aaron Sidford. 2020. Faster energy maximization for faster maximum flow. In Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020. 
803\u2013814."},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1137\/16M1099546"},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2013.35"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2016.70"},{"key":"e_1_3_2_1_51_1","volume-title":"Mahoney","author":"Meng Xiangrui","year":"2013","unstructured":"Xiangrui Meng and Michael W . Mahoney . 2013 . Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In STOC. ACM , 91\u2013100. Xiangrui Meng and Michael W. Mahoney. 2013. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In STOC. ACM, 91\u2013100."},{"key":"e_1_3_2_1_52_1","volume-title":"Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013 (JMLR Workshop and Conference Proceedings","volume":"896","author":"Meng Xiangrui","unstructured":"Xiangrui Meng and Michael W. Mahoney . 2013. Robust Regression on MapReduce . In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013 (JMLR Workshop and Conference Proceedings , Vol. 28). JMLR.org, 888\u2013 896 . http:\/\/proceedings.mlr.press\/v28\/meng13b.html Xiangrui Meng and Michael W. Mahoney. 2013. Robust Regression on MapReduce. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013 (JMLR Workshop and Conference Proceedings, Vol. 28). JMLR.org, 888\u2013896. http:\/\/proceedings.mlr.press\/v28\/meng13b.html"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1137\/110833786"},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"crossref","unstructured":"Yurii Nesterov. 2019. Implementable tensor methods in unconstrained convex optimization. Mathematical Programming 1\u201327.  Yurii Nesterov. 2019. Implementable tensor methods in unconstrained convex optimization. 
Mathematical Programming 1\u201327.","DOI":"10.1007\/s10107-019-01449-1"},{"key":"e_1_3_2_1_55_1","unstructured":"Yurii E Nesterov. 1983. A method for solving the convex programming problem with convergence rate O(1\/k^2). In Dokl. akad. nauk Sssr. 269 543\u2013547.  Yurii E Nesterov. 1983. A method for solving the convex programming problem with convergence rate O(1\/k^2). In Dokl. akad. nauk Sssr. 269 543\u2013547."},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1287\/moor.1080.0348"},{"key":"e_1_3_2_1_57_1","first-page":"1","article-title":"A polynomial-time algorithm, based on Newton\u2019s method, for linear programming","volume":"40","author":"Renegar James","year":"1988","unstructured":"James Renegar . 1988 . A polynomial-time algorithm, based on Newton\u2019s method, for linear programming . Math. Program. , 40 , 1 - 3 (1988), 59\u201393. James Renegar. 1988. A polynomial-time algorithm, based on Newton\u2019s method, for linear programming. Math. Program., 40, 1-3 (1988), 59\u201393.","journal-title":"Math. Program."},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1137\/080734029"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/SFCS.1989.63499"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF01580859"},{"key":"e_1_3_2_1_61_1","volume-title":"Banach spaces for analysts","author":"Wojtaszczyk Przemyslaw","unstructured":"Przemyslaw Wojtaszczyk . 1996. Banach spaces for analysts . Cambridge University Press . Przemyslaw Wojtaszczyk. 1996. Banach spaces for analysts. Cambridge University Press."},{"key":"e_1_3_2_1_62_1","volume-title":"COLT 2013 - The 26th Annual Conference on Learning Theory","author":"David","year":"2013","unstructured":"David P. Woodruff and Qin Zhang. 2013. Subspace Embeddings and \u2113 _p-Regression Using Exponential Random Variables . In COLT 2013 - The 26th Annual Conference on Learning Theory , June 12-14, 2013 , Princeton University, NJ, USA. 
546\u2013567. David P. Woodruff and Qin Zhang. 2013. Subspace Embeddings and \u2113 _p-Regression Using Exponential Random Variables. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA. 546\u2013567."},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611974331.ch41"}],"event":{"name":"STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing","location":"Rome Italy","acronym":"STOC '22","sponsor":["SIGACT ACM Special Interest Group on Algorithms and Computation Theory"]},"container-title":["Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3519935.3519971","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3519935.3519971","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T17:49:38Z","timestamp":1750268978000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3519935.3519971"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,9]]},"references-count":63,"alternative-id":["10.1145\/3519935.3519971","10.1145\/3519935"],"URL":"https:\/\/doi.org\/10.1145\/3519935.3519971","relation":{},"subject":[],"published":{"date-parts":[[2022,6,9]]},"assertion":[{"value":"2022-06-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
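The record above is a Crossref "work" message. A minimal sketch of pulling the commonly used fields (DOI, title, authors, publication year) out of such a record; the `record` literal here is a trimmed-down stand-in for the full payload above, not the complete response:

```python
import json

# Trimmed stand-in for the Crossref "work" message above (same field shapes,
# only a subset of the keys).
record = json.loads("""
{"status": "ok", "message-type": "work",
 "message": {"DOI": "10.1145/3519935.3519971",
             "title": ["Improved iteration complexities for overconstrained p-norm regression"],
             "author": [{"given": "Arun", "family": "Jambulapati"},
                        {"given": "Yang P.", "family": "Liu"},
                        {"given": "Aaron", "family": "Sidford"}],
             "published-print": {"date-parts": [[2022, 6, 9]]},
             "reference-count": 63}}
""")

msg = record["message"]
doi = msg["DOI"]
title = msg["title"][0]                      # Crossref titles are lists
authors = [f'{a["given"]} {a["family"]}' for a in msg["author"]]
year = msg["published-print"]["date-parts"][0][0]  # [[year, month, day]]

print(doi)                  # 10.1145/3519935.3519971
print(title)
print(", ".join(authors))   # Arun Jambulapati, Yang P. Liu, Aaron Sidford
print(year)                 # 2022
```

The live record can be fetched from the Crossref REST API at `https://api.crossref.org/works/10.1145/3519935.3519971`; the JSON returned there has the same shape as the full document above.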