Zeroth-order algorithms for smooth saddle-point problems

Abstract

Saddle-point problems have recently gained increased attention from the machine learning community, mainly due to applications in training Generative Adversarial Networks using stochastic gradients. At the same time, in some applications only a zeroth-order oracle is available. In this paper, we propose several algorithms to solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles, and estimate their convergence rates and their dependence on the dimension n of the variable. In particular, our analysis shows that when the feasible set is a direct product of two simplices, our convergence rate for the stochastic term is only a factor of log n worse than that of first-order methods. Finally, we demonstrate the performance of our zeroth-order methods on practical problems.
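To illustrate the setting, the following is a minimal sketch (not the paper's algorithm) of how a two-point zeroth-order gradient estimate, which uses only function values, can drive gradient descent-ascent on a toy smooth strongly convex-concave saddle-point problem. All names, step sizes, and the toy objective are hypothetical choices for illustration only.

```python
# Illustrative sketch: two-point zeroth-order gradient estimation inside
# gradient descent-ascent for a toy convex-concave saddle-point problem.
import numpy as np

rng = np.random.default_rng(0)

n = 5
A = rng.standard_normal((n, n)) / np.sqrt(n)

def f(x, y):
    # Toy strongly convex-concave objective: 0.5||x||^2 + x^T A y - 0.5||y||^2,
    # whose unique saddle point is (0, 0).
    return 0.5 * x @ x + x @ A @ y - 0.5 * y @ y

def zo_grad(func, z, tau=1e-4):
    # Two-point randomized gradient estimate using only function evaluations:
    # n * (func(z + tau*e) - func(z - tau*e)) / (2*tau) * e,
    # with e drawn uniformly from the unit sphere.
    e = rng.standard_normal(z.shape)
    e /= np.linalg.norm(e)
    return z.size * (func(z + tau * e) - func(z - tau * e)) / (2 * tau) * e

x, y = rng.standard_normal(n), rng.standard_normal(n)
step = 0.05

for _ in range(2000):
    gx = zo_grad(lambda u: f(u, y), x)  # estimate of the gradient in x
    gy = zo_grad(lambda v: f(x, v), y)  # estimate of the gradient in y
    x -= step * gx                       # descend in x
    y += step * gy                       # ascend in y

print("||x||, ||y|| near the saddle point (0, 0):",
      np.linalg.norm(x), np.linalg.norm(y))
```

The factor of the dimension in the estimator is one way the dimension n enters zeroth-order complexity bounds; the paper's methods and rates are more refined than this plain descent-ascent sketch.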

Publication
Mathematical Optimization Theory and Operations Research: Recent Trends: 20th International Conference, MOTOR 2021, Irkutsk, Russia, July 5–10, 2021, Revised Selected Papers 20