Minimax optimization has recently been widely applied to many machine learning tasks, such as generative adversarial networks, robust learning, and reinforcement learning. In this paper, we study a class of nonconvex-nonconcave minimax optimization problems with nonsmooth regularization, where the objective function is possibly nonconvex in the primal variable $x$, and is nonconcave but satisfies the Polyak-Łojasiewicz (PL) condition in the dual variable $y$.
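For concreteness, this problem class can be written as the following sketch; the symbols $f$, $g$, and $\mu$ are illustrative assumptions made here, since the abstract itself does not fix a formulation:
\[
\min_{x \in \mathbb{R}^{d_1}} \; \max_{y \in \mathbb{R}^{d_2}} \; F(x,y) := f(x,y) + g(x),
\]
where $f(x,y)$ is smooth and possibly nonconvex in $x$, $g(x)$ is a possibly nonsmooth regularizer (e.g., $\lambda \|x\|_1$), and for every fixed $x$ the function $f(x,\cdot)$ satisfies the PL condition in $y$ with modulus $\mu > 0$:
\[
\|\nabla_y f(x,y)\|^2 \;\ge\; 2\mu \Big( \max_{y'} f(x,y') - f(x,y) \Big) \quad \text{for all } y.
\]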
Moreover, we propose a class of enhanced momentum-based gradient descent ascent methods (i.e., MSGDA and AdaMSGDA) to solve these stochastic nonconvex-PL minimax problems. In particular, our AdaMSGDA algorithm can use various adaptive learning rates to update the variables $x$ and $y$ without relying on any specific type. Theoretically, we prove that our methods achieve the best-known sample complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary solution, while requiring only one sample at each iteration. Numerical experiments on a PL-game and Wasserstein-GAN demonstrate the efficiency of our proposed methods.
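To make the algorithmic idea concrete, the following is a minimal sketch of a single-sample, momentum-based gradient descent ascent step. The step sizes, momentum parameter, and the $\ell_1$ proximal operator are illustrative placeholders, and the recursion shown is a standard moving-average momentum update, not the paper's exact MSGDA/AdaMSGDA recursion.

```python
import numpy as np

def prox_g(x, eta, lam=0.01):
    # Proximal operator of an assumed L1 regularizer g(x) = lam * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)

def momentum_gda_step(x, y, m_x, m_y, grad_x, grad_y,
                      eta_x=0.01, eta_y=0.05, beta=0.9):
    """One momentum GDA step using a single stochastic gradient sample (grad_x, grad_y)."""
    # Moving-average momentum estimates of the stochastic gradients.
    m_x = beta * m_x + (1.0 - beta) * grad_x
    m_y = beta * m_y + (1.0 - beta) * grad_y
    # Proximal gradient descent on the primal variable x (handles the nonsmooth regularizer g).
    x = prox_g(x - eta_x * m_x, eta_x)
    # Gradient ascent on the dual variable y.
    y = y + eta_y * m_y
    return x, y, m_x, m_y
```

In the adaptive variant, the momentum estimates would additionally be rescaled by coordinate-wise adaptive learning rates (e.g., Adam-style second-moment normalization); the abstract only states that various adaptive learning rates can be used without relying on any specific type, so no particular choice is implied here.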