{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,22]],"date-time":"2025-12-22T18:35:04Z","timestamp":1766428504168,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":37,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,6,15]],"date-time":"2021-06-15T00:00:00Z","timestamp":1623715200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"NSF","doi-asserted-by":"publisher","award":["CCF-1908347"],"award-info":[{"award-number":["CCF-1908347"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,6,15]]},"DOI":"10.1145\/3406325.3451097","type":"proceedings-article","created":{"date-parts":[[2021,6,16]],"date-time":"2021-06-16T01:26:13Z","timestamp":1623806773000},"page":"896-909","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Greedy adversarial equilibrium: an efficient alternative to nonconvex-nonconcave min-max optimization"],"prefix":"10.1145","author":[{"given":"Oren","family":"Mangoubi","sequence":"first","affiliation":[{"name":"Worcester Polytechnic Institute, USA"}]},{"given":"Nisheeth K.","family":"Vishnoi","sequence":"additional","affiliation":[{"name":"Yale University, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,6,15]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Last-iterate convergence rates for min-max optimization. arXiv preprint arXiv:1906.02027","author":"Abernethy Jacob","year":"2019","unstructured":"Jacob Abernethy, Kevin A Lai, and Andre Wibisono. 2019. Last-iterate convergence rates for min-max optimization. arXiv preprint arXiv:1906.02027, 2019."},{"key":"e_1_3_2_1_2_1","volume-title":"Local Saddle Point Optimization: A Curvature Exploitation Approach. In The 22nd International Conference on Artificial Intelligence and Statistics. Pages 486\u2013495","author":"Adolphs Leonard","year":"2019","unstructured":"Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, and Thomas Hofmann. 2019. Local Saddle Point Optimization: A Curvature Exploitation Approach. In The 22nd International Conference on Artificial Intelligence and Statistics. Pages 486\u2013495."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3055399.3055464"},{"key":"e_1_3_2_1_4_1","unstructured":"Zeyuan Allen-Zhu. 2018. Natasha 2: Faster non-convex optimization than SGD. In Advances in Neural Information Processing Systems. Pages 2675\u20132686."},{"key":"e_1_3_2_1_5_1","volume-title":"Conference on Learning Theory. Pages 242\u2013299","author":"Arjevani Yossi","year":"2020","unstructured":"Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Ayush Sekhari, and Karthik Sridharan. 2020. Second-order information in non-convex stochastic optimization: Power and limitations. In Conference on Learning Theory. Pages 242\u2013299."},{"key":"e_1_3_2_1_6_1","unstructured":"Avrim Blum and Ronald L Rivest. 1989. Training a 3-node neural network is NP-complete. In Advances in Neural Information Processing Systems. Pages 494\u2013501."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/s001459910006"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1137\/17M1114296"},{"key":"e_1_3_2_1_9_1","unstructured":"Constantinos Daskalakis and Ioannis Panageas. 2018. The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems. Pages 9236\u20139246."},{"key":"e_1_3_2_1_10_1","volume-title":"The Complexity of Constrained Min-Max Optimization. In ACM SIGACT Symposium on Theory of Computing (STOC","author":"Daskalakis Constantinos","year":"2021","unstructured":"Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. 2021. The Complexity of Constrained Min-Max Optimization. In ACM SIGACT Symposium on Theory of Computing (STOC 2021)."},{"key":"e_1_3_2_1_11_1","first-page":"2018","article-title":"Near-optimal non-convex optimization via stochastic path integrated differential estimator","volume":"31","author":"Fang C","year":"2018","unstructured":"C Fang, CJ Li, Z Lin, and T Zhang. 2018. Near-optimal non-convex optimization via stochastic path integrated differential estimator. Advances in Neural Information Processing Systems, 31, 2018. Pages 689.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_12_1","volume-title":"Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms. Pages 385\u2013394","author":"Flaxman Abraham D","year":"2005","unstructured":"Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. 2005. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms. Pages 385\u2013394."},{"key":"e_1_3_2_1_13_1","volume-title":"Conference on Learning Theory. Pages 797\u2013842","author":"Ge Rong","year":"2015","unstructured":"Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. 2015. Escaping from saddle points\u2014online stochastic gradient for tensor decomposition. In Conference on Learning Theory. Pages 797\u2013842."},{"key":"e_1_3_2_1_14_1","volume-title":"International Conference on Learning Representations (ICLR","author":"Gidel Gauthier","year":"2019","unstructured":"Gauthier Gidel, Hugo Berard, Ga\u00ebtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. 2019. A variational inequality perspective on generative adversarial networks. In International Conference on Learning Representations (ICLR 2019)."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1016\/0022-0000(84)90070-9"},{"key":"e_1_3_2_1_16_1","unstructured":"Ian Goodfellow Jean Pouget-Abadie Mehdi Mirza Bing Xu David Warde-Farley Sherjil Ozair Aaron Courville and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. Pages 2672\u20132680."},{"volume-title":"Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science. Pages 723\u2013732","author":"Guruswami V.","key":"e_1_3_2_1_17_1","unstructured":"V. Guruswami and A. Smith. 2010. Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science. Pages 723\u2013732."},{"key":"e_1_3_2_1_18_1","unstructured":"Martin Heusel Hubert Ramsauer Thomas Unterthiner Bernhard Nessler and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems. Pages 6626\u20136637."},{"key":"e_1_3_2_1_19_1","volume-title":"On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. arXiv preprint arXiv:1902.04811","author":"Jin Chi","year":"2019","unstructured":"Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I Jordan. 2019. On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. arXiv preprint arXiv:1902.04811, 2019."},{"key":"e_1_3_2_1_20_1","volume-title":"ICML","author":"Jin Chi","year":"2020","unstructured":"Chi Jin, Praneeth Netrapalli, and Michael I. Jordan. 2020. What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? In ICML 2020."},{"key":"e_1_3_2_1_21_1","volume-title":"Vishnoi","author":"Keswani Vijay","year":"2020","unstructured":"Vijay Keswani, Oren Mangoubi, Sushant Sachdeva, and Nisheeth K. Vishnoi. 2020. GANs with First-Order Greedy Discriminators. arXiv preprint arXiv:2006.12376, 2020."},{"key":"e_1_3_2_1_22_1","volume-title":"Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. arXiv preprint arXiv:1810.10207","author":"Lin Qihang","year":"2018","unstructured":"Qihang Lin, Mingrui Liu, Hassan Rafique, and Tianbao Yang. 2018. Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. arXiv preprint arXiv:1810.10207, 2018."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-57785-8_183"},{"key":"e_1_3_2_1_24_1","unstructured":"Mingrui Liu Zhe Li Xiaoyu Wang Jinfeng Yi and Tianbao Yang. 2018. Adaptive negative curvature descent with applications in non-convex optimization. In Advances in Neural Information Processing Systems. Pages 4853\u20134862."},{"key":"e_1_3_2_1_25_1","volume-title":"ICLR","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. ICLR, 2018."},{"key":"e_1_3_2_1_26_1","volume-title":"Greedy Adversarial Equilibrium: An Efficient Alternative to Nonconvex-Nonconcave Min-Max Optimization. arXiv preprint arXiv:2006.12363","author":"Mangoubi Oren","year":"2020","unstructured":"Oren Mangoubi and Nisheeth K Vishnoi. 2020. Greedy Adversarial Equilibrium: An Efficient Alternative to Nonconvex-Nonconcave Min-Max Optimization. arXiv preprint arXiv:2006.12363, 2020."},{"key":"e_1_3_2_1_27_1","volume-title":"The computational complexity of training ReLU(s). arXiv preprint arXiv:1810.04207","author":"Manurangsi Pasin","year":"2018","unstructured":"Pasin Manurangsi and Daniel Reichman. 2018. The computational complexity of training ReLU(s). arXiv preprint arXiv:1810.04207, 2018."},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.304"},{"key":"e_1_3_2_1_29_1","volume-title":"On finding local nash equilibria (and only local nash equilibria) in zero-sum games. arXiv preprint arXiv:1901.00838","author":"Mazumdar Eric V","year":"2019","unstructured":"Eric V Mazumdar, Michael I Jordan, and S Shankar Sastry. 2019. On finding local nash equilibria (and only local nash equilibria) in zero-sum games. arXiv preprint arXiv:1901.00838, 2019."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-30576-7_1"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10107-006-0706-8"},{"key":"e_1_3_2_1_32_1","unstructured":"Maher Nouiehed Maziar Sanjabi Tianjian Huang Jason D Lee and Meisam Razaviyayn. 2019. Solving a class of non-convex min-max games using iterative first order methods. In Advances in Neural Information Processing Systems. Pages 14934\u201314942."},{"key":"e_1_3_2_1_33_1","volume-title":"Non-convex min-max optimization: Provable algorithms and applications in machine learning. arXiv preprint arXiv:1810.02060","author":"Rafique Hassan","year":"2018","unstructured":"Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. 2018. Non-convex min-max optimization: Provable algorithms and applications in machine learning. arXiv preprint arXiv:1810.02060, 2018."},{"key":"e_1_3_2_1_34_1","volume-title":"International Conference on Artificial Intelligence and Statistics. Pages 1233\u20131242","author":"Reddi Sashank","year":"2018","unstructured":"Sashank Reddi, Manzil Zaheer, Suvrit Sra, Barnabas Poczos, Francis Bach, Ruslan Salakhutdinov, and Alex Smola. 2018. A Generic Approach for Escaping Saddle points. In International Conference on Artificial Intelligence and Statistics. Pages 1233\u20131242."},{"key":"e_1_3_2_1_35_1","volume-title":"On a game without a value. Contributions to the theory of games, 3","author":"Sion Maurice","year":"1957","unstructured":"Maurice Sion and Philip Wolfe. 1957. On a game without a value. Contributions to the theory of games, 3, 1957. Pages 299\u2013306."},{"key":"e_1_3_2_1_36_1","volume-title":"NeurIPS","author":"Thekumparampil Kiran Koshy","year":"2019","unstructured":"Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, and Sewoong Oh. 2019. Efficient Algorithms for Smooth Minimax Optimization. NeurIPS, 2019."},{"key":"e_1_3_2_1_37_1","volume-title":"On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach. In International Conference on Learning Representations.","author":"Wang Yuanhao","year":"2020","unstructured":"Yuanhao Wang, Guodong Zhang, and Jimmy Ba. 2020. On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach. In International Conference on Learning Representations."}],"event":{"name":"STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing","sponsor":["SIGACT ACM Special Interest Group on Algorithms and Computation Theory"],"location":"Virtual Italy","acronym":"STOC '21"},"container-title":["Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3406325.3451097","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3406325.3451097","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3406325.3451097","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:24:53Z","timestamp":1750195493000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3406325.3451097"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,15]]},"references-count":37,"alternative-id":["10.1145\/3406325.3451097","10.1145\/3406325"],"URL":"https:\/\/doi.org\/10.1145\/3406325.3451097","relation":{},"subject":[],"published":{"date-parts":[[2021,6,15]]},"assertion":[{"value":"2021-06-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}