An Optimistic Acceleration of AMSGrad for Nonconvex Optimization

Jun-Kun Wang (Yale University); Xiaoyun Li (Baidu Research); Belhal Karimi (Baidu Research)*; Ping Li (Baidu)

Abstract

We propose a new variant of AMSGrad (Reddi et al., 2018), a popular adaptive gradient-based optimization algorithm widely used for training deep neural networks. Our algorithm incorporates prior knowledge about the sequence of consecutive mini-batch gradients and exploits its underlying structure, which makes the gradients sequentially predictable. By leveraging this predictability together with ideas from optimistic online learning, the proposed algorithm can accelerate convergence and improve sample efficiency. After establishing a tighter regret bound under certain convexity conditions, we offer a complementary view of our algorithm that generalizes to the offline and stochastic nonconvex optimization settings. In the nonconvex case, we establish a non-asymptotic convergence bound that is independent of the initialization. We illustrate, via numerical experiments, the practical speedup on several deep learning models and benchmark datasets.
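For intuition, below is a minimal NumPy sketch of what a prediction-augmented, AMSGrad-style update could look like. It is only an illustration of the general idea described in the abstract: the gradient predictor (a simple linear extrapolation from the last two gradients), the function name, and all variable names are assumptions for this sketch, not the paper's exact algorithm or predictor.

```python
import numpy as np

def optimistic_amsgrad_step(w, grad, state, lr=1e-3,
                            beta1=0.9, beta2=0.999, eps=1e-8):
    """One illustrative parameter update: an AMSGrad-style step taken with a
    *predicted* first moment built from a guess of the next mini-batch
    gradient (the 'optimistic' ingredient)."""
    m, v, v_hat, prev_grad = state
    # Standard AMSGrad moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)  # element-wise max keeps effective step sizes non-increasing
    # Placeholder gradient predictor: linear extrapolation from the last two gradients.
    grad_guess = 2.0 * grad - prev_grad
    m_guess = beta1 * m + (1 - beta1) * grad_guess
    # Step using the predicted first moment instead of the current one.
    w_new = w - lr * m_guess / (np.sqrt(v_hat) + eps)
    return w_new, (m, v, v_hat, grad)

# Usage sketch: state starts as zero moments and a zero "previous gradient".
w = np.zeros(10)
state = (np.zeros_like(w), np.zeros_like(w), np.zeros_like(w), np.zeros_like(w))
grad = np.random.randn(10)  # stand-in for a mini-batch gradient
w, state = optimistic_amsgrad_step(w, grad, state)
```

The predictor is the component that the optimistic framework leaves open: any rule that anticipates the next gradient from the observed sequence could be substituted for the extrapolation used here.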