This post summarizes joint work with Anima on a new algorithm for competitive optimization: competitive gradient descent (CGD). If you want to know more, you should check out the paper, play with Hongkai’s PyTorch code, or talk to us at NeurIPS 2019, where we will present a poster on Thursday, Dec 12th from 10:45 AM to 12:45 PM in the east exhibition hall (poster #195).
Many learning algorithms are modeled as a single agent minimizing a loss function, such as empirical risk. However, the spectacular successes of generative adversarial networks (GANs) have renewed interest in algorithms that are modeled after multiple agents that compete in optimizing their own objective functions, which we refer to as competitive optimization.
Since much of single agent machine learning is powered by variants of gradient descent, this raises the important question: What is the natural generalization of gradient descent to competitive optimization? In this note, I will try to convince you that this natural generalization of gradient descent is a novel algorithm, with a beautiful game-theoretic interpretation and promising practical performance.
The problem with simultaneous gradient descent
Consider a single-agent optimization problem $\min_{x} f(x)$. Gradient descent (GD) with step size $\eta$ is given by the update rule

$$x_{k+1} = x_k - \eta \nabla f(x_k),$$

where the gradient $\nabla f(x_k)$ is the vector containing the partial derivatives of $f$, taken in the last iterate $x_k$. The vector $-\nabla f(x_k)$ points in the direction of steepest descent of the loss function at the point $x_k$, which is why gradient descent is also referred to as the method of steepest descent.
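As a concrete illustration (not from the post), the GD update rule takes only a few lines of NumPy; the quadratic loss, step size, and iteration count below are my own illustrative choices:

```python
import numpy as np

# Gradient descent on the toy loss f(x) = 0.5 * x^T A x.
# A, eta, and the iteration count are illustrative choices.
A = np.array([[2.0, 0.0], [0.0, 1.0]])

def grad_f(x):
    return A @ x  # gradient of 0.5 * x^T A x for symmetric A

eta = 0.1
x = np.array([1.0, -1.0])
for _ in range(300):
    x = x - eta * grad_f(x)  # x_{k+1} = x_k - eta * grad f(x_k)

print(x)  # approaches the minimizer (0, 0)
```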
Let us now move to the competitive optimization problem

$$\min_{x} f(x, y), \qquad \min_{y} g(x, y),$$

restricting ourselves to two agents for the sake of simplicity. Here, the first agent tries to choose $x$ so as to minimize $f(x, y)$, while the second agent tries to choose the decision variable $y$ so as to minimize $g(x, y)$. The interesting part is that the optimal choice of $x$ depends on $y$ and vice versa, and the objectives of the two players will in general be at odds with each other, the important special case $g = -f$ corresponding to zero-sum or minimax games.
Since neither player can know what the other player will do, they might assume each other to not move at all. Under this assumption, following the direction of steepest descent seems like a reasonable strategy, leading to simultaneous gradient descent (SimGD):

$$x_{k+1} = x_k - \eta \nabla_x f(x_k, y_k), \qquad y_{k+1} = y_k - \eta \nabla_y g(x_k, y_k).$$

Here, $\nabla_x$ and $\nabla_y$ denote the gradient with respect to the variables $x$ and $y$, respectively.
Unfortunately, even on the simplest bilinear minimax problem $f(x, y) = x^{\top} y = -g(x, y)$, SimGD fails to converge to the Nash equilibrium $(x^{*}, y^{*}) = (0, 0)$. Instead, its trajectories form ever larger cycles as the two players chase each other in strategy space. The oscillatory behavior of SimGD is not restricted to this toy problem, and a variety of corrections have been proposed in the literature.
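The divergence is easy to reproduce numerically. The following sketch (the step size and starting point are my own choices) runs SimGD on the scalar version of the bilinear game and tracks the distance of the iterates to the equilibrium:

```python
import math

# SimGD on the bilinear game f(x, y) = x*y = -g(x, y), with scalar strategies.
# The Nash equilibrium is (0, 0), but each step multiplies the distance to it
# by sqrt(1 + eta^2) > 1, so the iterates spiral outward.
eta = 0.2
x, y = 0.5, 0.5
radii = []
for _ in range(100):
    gx, gy = y, -x                       # grad_x f = y,  grad_y g = -x
    x, y = x - eta * gx, y - eta * gy    # simultaneous updates
    radii.append(math.hypot(x, y))

print(radii[0], radii[-1])  # the distance to the equilibrium grows every step
```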
Gradient descent revisited
Rather than adding modifications to SimGD, we begin by revisiting gradient descent. It is well known that the GD update can equivalently be written as

$$x_{k+1} = \arg\min_{x} \; (x - x_k)^{\top} \nabla f(x_k) + \frac{1}{2\eta} \lVert x - x_k \rVert^2.$$

Here, $\nabla f(x_k)$ is again the vector containing the partial derivatives of $f$. This can be interpreted as the agent minimizing the linear approximation of her loss in the last iterate $x_k$, adding a quadratic regularization term that expresses her distrust of this approximation far away from the point of linearization. This suggests that for multiple players, the gradient descent update should be the solution of a local first order approximation of the full problem, with quadratic regularization terms on each player that express their limited confidence in this approximation.
Linear or bilinear
This raises a fundamental question: What is the right notion of local first order approximation for multi-agent optimization problems? In single-agent optimization, the local first order approximation of the problem is obtained as the linear approximation of the objective function. If we use a linear approximation of both agents’ loss functions, we obtain the following game:

$$\min_{x} \; x^{\top} \nabla_x f + y^{\top} \nabla_y f + \frac{1}{2\eta} x^{\top} x, \qquad \min_{y} \; y^{\top} \nabla_y g + x^{\top} \nabla_x g + \frac{1}{2\eta} y^{\top} y.$$

Here and in the following, $x$ and $y$ denote the increments that the players add to the last iterate, and the evaluations of loss functions and their derivatives always occur in the last iterate $(x_k, y_k)$, unless otherwise mentioned. When looking at the above local game we observe that the optimal strategy of player $x$ is independent of $y$ and vice versa. Thus, the above game is equivalent to

$$\min_{x} \; x^{\top} \nabla_x f + \frac{1}{2\eta} x^{\top} x, \qquad \min_{y} \; y^{\top} \nabla_y g + \frac{1}{2\eta} y^{\top} y,$$
which leads to the update rule of SimGD. One explanation for the poor convergence properties of SimGD is that this local game has completely lost the game-theoretic structure of the original problem and instead consists of both players myopically minimizing their own objective functions.
Instead of generalizing the linear approximation in the single-agent case to a linear approximation in the multi-agent case, we could also generalize it to a multilinear approximation. Instead of using derivative information up to first order overall ($f$, $\nabla_x f$, $\nabla_y f$, $g$, $\nabla_x g$, $\nabla_y g$), we use derivatives up to first order per agent. In the two-agent setting, this is the bilinear approximation obtained by including the “mixed” second derivatives ($D^2_{xy} f$, $D^2_{yx} f$, $D^2_{xy} g$, and $D^2_{yx} g$) in the approximation, while omitting the “pure” second derivatives ($D^2_{xx} f$, $D^2_{yy} f$, $D^2_{xx} g$, and $D^2_{yy} g$). The resulting local game is

$$\min_{x} \; x^{\top} \nabla_x f + x^{\top} D^2_{xy} f \, y + y^{\top} \nabla_y f + \frac{1}{2\eta} x^{\top} x, \qquad \min_{y} \; y^{\top} \nabla_y g + y^{\top} D^2_{yx} g \, x + x^{\top} \nabla_x g + \frac{1}{2\eta} y^{\top} y,$$

which can be simplified to

$$\min_{x} \; x^{\top} \nabla_x f + x^{\top} D^2_{xy} f \, y + \frac{1}{2\eta} x^{\top} x, \qquad \min_{y} \; y^{\top} \nabla_y g + y^{\top} D^2_{yx} g \, x + \frac{1}{2\eta} y^{\top} y,$$

since the terms that do not involve a player’s own decision variable do not affect her optimal strategy.
This local game preserves the interactive aspect of the underlying problem, since the optimal action of $x$ depends on the next move of $y$ and vice versa. The local game obtained from bilinear approximation also has the property that each player’s loss function is convex in her own decision variable. For this type of game, a natural notion of solution is given by a Nash equilibrium, that is, a point such that neither of the two players can unilaterally improve their payoff. Indeed, one can show that the unique Nash equilibrium of this game is given by

$$x_{k+1} = x_k - \eta \left( \mathrm{Id} - \eta^2 D^2_{xy} f \, D^2_{yx} g \right)^{-1} \left( \nabla_x f - \eta \, D^2_{xy} f \, \nabla_y g \right),$$

$$y_{k+1} = y_k - \eta \left( \mathrm{Id} - \eta^2 D^2_{yx} g \, D^2_{xy} f \right)^{-1} \left( \nabla_y g - \eta \, D^2_{yx} g \, \nabla_x f \right).$$
Using this solution as our update, we obtain a new algorithm, which we refer to as competitive gradient descent (CGD).
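For the scalar version of the bilinear game from earlier, $f(x, y) = x y = -g(x, y)$, the CGD update can be written out in closed form, since the mixed derivatives are just $1$ and $-1$. The following sketch (with an illustrative step size of my own choosing) shows the iterates converging to the equilibrium that SimGD spirals away from:

```python
# CGD on the bilinear game f(x, y) = x*y = -g(x, y), with scalar strategies.
# Here D_xy f = 1 and D_yx g = -1, so the matrix inverse in the update
# reduces to division by (1 + eta^2).
eta = 0.2
x, y = 0.5, 0.5
for _ in range(500):
    # (Id - eta^2 Dxyf Dyxg)^{-1} (grad_x f - eta Dxyf grad_y g) = (y + eta*x)/(1 + eta^2)
    dx = (y + eta * x) / (1 + eta**2)
    # analogous expression for the second player
    dy = (-x + eta * y) / (1 + eta**2)
    x, y = x - eta * dx, y - eta * dy

print(x, y)  # spirals inward toward the Nash equilibrium (0, 0)
```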
What I think that they think that I think … that I do
For small enough $\eta$, the matrix inverse appearing in the CGD update can be expanded in a Neumann series as

$$\left( \mathrm{Id} - \eta^2 D^2_{xy} f \, D^2_{yx} g \right)^{-1} = \sum_{k=0}^{\infty} \left( \eta^2 D^2_{xy} f \, D^2_{yx} g \right)^{k}.$$
If we apply this identity to the matrix inverse in the CGD update rule, the partial sums of this series have an intuitive interpretation as a cognitive hierarchy: the first summand yields the update rule of SimGD,

$$x_{k+1} = x_k - \eta \nabla_x f, \qquad y_{k+1} = y_k - \eta \nabla_y g,$$

which is the optimal strategy for the local game if we assume that the other player stays still. The second partial sum yields the update rule

$$x_{k+1} = x_k - \eta \left( \nabla_x f - \eta \, D^2_{xy} f \, \nabla_y g \right), \qquad y_{k+1} = y_k - \eta \left( \nabla_y g - \eta \, D^2_{yx} g \, \nabla_x f \right),$$
which is the optimal strategy under the assumption that the other agent makes the gradient descent update, that is, assuming the other agent assumes that we stay still. The third partial sum is the optimal strategy assuming that the other player assumes that we assume that they stay still, and so forth, until the Nash equilibrium is recovered in the limit. In principle, the Neumann series could be used to approximate the matrix inverse in the update rule, which would amount to using Richardson iteration. However, the matrix inverse is defined even in settings where the Neumann series might not converge, and by using optimal Krylov subspace methods such as conjugate gradient, we can obtain significantly better approximations with fewer Hessian-vector products.
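The correspondence between partial sums of the Neumann series and approximations of the matrix inverse can be checked numerically; the matrix and step size below are arbitrary illustrative choices, small enough that the series converges:

```python
import numpy as np

# Partial sums of the Neumann series (Id - eta^2 A)^{-1} = sum_k (eta^2 A)^k,
# valid when the spectral radius of eta^2 A is below one.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
eta = 0.1
M = eta**2 * A
exact = np.linalg.inv(np.eye(4) - M)

approx, term = np.zeros((4, 4)), np.eye(4)
for k in range(10):
    approx += term   # add the k-th term of the series
    term = term @ M  # next power of (eta^2 A)

print(np.linalg.norm(exact - approx))  # error shrinks as terms are added
```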
Despite the game-theoretic interpretation of CGD, the choice of a bilinear local approximation might still seem arbitrary. Indeed, the “normal” thing to do in optimization would be to go from the linear approximation underlying SimGD straight to a quadratic approximation, leading for instance to a damped and regularized Newton’s method given by

$$x_{k+1} = x_k - \eta \left( \mathrm{Id} + \eta \, D^2_{xx} f \right)^{-1} \nabla_x f, \qquad y_{k+1} = y_k - \eta \left( \mathrm{Id} + \eta \, D^2_{yy} g \right)^{-1} \nabla_y g.$$
In our work on CGD, we argue that the hierarchy of approximations in competitive optimization is fundamentally different from the corresponding hierarchy in single-agent optimization. Instead of linear, quadratic, cubic, etc. approximations of the loss function, it is more natural to consider approximations that are linear, quadratic, cubic, etc. in each player. In particular, the natural notion of first order approximation is given by the bilinear approximation of the objective function. In the following, I will present three justifications for this claim.
Reason 1: Getting the invariances right
One reason to only consider linear, quadratic, cubic, etc. approximations in single-agent optimization is that we want our approximation to be independent of the coordinate system we use to represent the decision variables. Indeed, we can check that for an invertible matrix $A$ and $\tilde{f}(x) := f(Ax)$ we have

$$\tilde{f}(x_k) + (x - x_k)^{\top} \nabla \tilde{f}(x_k) = f(A x_k) + (A x - A x_k)^{\top} \nabla f(A x_k),$$

where the derivative on the left side is taken in the basepoint $x_k$, and the one on the right side in the corresponding point $A x_k$.

In words: taking the linear approximation in the original coordinate system yields the same result as applying the coordinate transform $x \mapsto Ax$, taking the linear approximation, and then transforming back as $x \mapsto A^{-1} x$. This property holds for all orders (linear, quadratic, cubic, etc.) of polynomial approximation.
For single agent problems given in an arbitrary coordinate system this is clearly a desirable feature, but do we want this invariance for competitive optimization?
For instance, $A$ could be a permutation matrix that takes a decision variable under the control of the first player and swaps it for a decision variable under the control of the second player. This is not just a different way to represent the same problem, but may be a drastically different game. Therefore, we do not want our approximation to be invariant to this transformation, and having this invariance built into the first, second, third, etc. order approximations is a severe limitation.
In contrast, the bilinear approximation is only invariant to reparametrizations of the strategy space of each player in isolation, but not to a reassignment of the decision variables across players. Mathematically, the bilinear approximation commutes with a linear coordinate transformation $A$ in general only if $A$ is block-diagonal, that is, if it does not mix the decision variables of the two players. Based on the above arguments, this is exactly the right set of invariances to be built into the approximation.
Reason 2: Bilinear plays well with quadratic regularization
One downside of Newton’s method in nonconvex optimization is that its update rule can amount to players choosing their local worst strategy if the critical point is a local maximum instead of a local minimum. This can be countered by adaptive choice of step sizes, trust-region methods, or cubic regularization, but a distinct benefit of first order methods is that their updates always amount to optimal strategies of the local game. Bilinear approximation preserves this important property while at the same time leading to an interactive local problem.
Reason 3: Fully exploiting first order regularity
Many competitive optimization problems have the structure
where the functions and are highly regular, but the functions and might only have first order regularity. In the setting of GANs for instance, maps the generator weights to the induced probability measure and the map maps the discriminator weights to the induced classifier. In the original GAN, the function would then be given as . We observe that in this case the mixed derivative is well behaved, since only one derivative falls onto each and . The “pure” second derivatives and however require second order regularity of or . Thus, instead of requiring second order regularity, the bilinear approximation fully exploits the first order regularity present in many competitive optimization problems.
Does it work?
Gaussian mixture GAN
As a first experiment, we tried using CGD on a GAN fitting a bimodal Gaussian mixture. While this is obviously a simple problem that can be solved with a variety of algorithms, it has the advantage of lending itself to easy visualization. With many of the existing methods we observed a strong cycling behavior, with generator and discriminator chasing each other between the two modes. In contrast, for all step sizes that we tried, CGD showed initial cycling behavior followed by a rapid splitting onto the two modes that stayed stable throughout the experiment. We emphasize that the other methods could surely be made to work on this problem with the right hyperparameters. The main point of interest of these experiments is the sudden splitting of mass observed when using CGD.
Visualization of the Gaussian mixture GAN. The triangles denote true data, while the circles denote fake data. Lighter color represents a larger confidence of the discriminator that the data is true data. The green arrows show the movement of the present fake data under the next weight update of the generator. The first video shows the frequently observed chasing between the two modes, which eventually diverges. The second video shows that when using CGD, the mass suddenly splits among the two modes and then stays stable in this configuration.
In order to study the convergence speed of CGD, we consider a linear-quadratic covariance estimation problem.
The main take-away is that while CGD has a higher cost per iteration than other methods, it is able to take larger steps without diverging, which often allows it to converge faster even when accounting for the Hessian vector products required for computing the matrix inverse in the CGD update using iterative methods.
Image GANs on CIFAR10 and implicit competitive regularization
As part of a separate work, Hongkai, Anima, and I have investigated the performance of CGD on image GANs. For instance, we observe that when taking an existing implementation of WGAN-GP, removing the gradient penalty, and instead training with CGD, we obtain an improved inception score on CIFAR10! We explain this behavior with an implicit regularization induced by CGD. If you want to know more, you should check out the paper or drop by the SGO & ML workshop this Saturday at NeurIPS. Of course there is still a lot to explore, so feel free to check out Hongkai’s PyTorch implementation of CGD and try out CGD on your own problems!
CGD for equality constrained optimization
An important class of competitive optimization problems arises from equality constrained optimization problems

$$\min_{x} \; f(x) \quad \text{subject to} \quad c(x) = 0,$$

that can be rewritten as the competitive problem

$$\min_{x} \max_{\lambda} \; f(x) + \lambda^{\top} c(x),$$

using a Lagrange multiplier $\lambda$.
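As a minimal sketch of this reformulation (the toy problem, step size, and iteration count are my own choices, not from our work), consider minimizing $x^2$ subject to $x = 1$ and applying the closed-form scalar CGD update to the resulting Lagrangian game:

```python
# Toy problem: minimize f(x) = x^2 subject to x - 1 = 0, rewritten as the
# Lagrangian game  min_x max_lam  L(x, lam) = x^2 + lam * (x - 1).
# Player 1 minimizes L, player 2 minimizes -L; the saddle is (x, lam) = (1, -2).
# For scalar players, D_xlam L = 1 and D_lamx (-L) = -1, so the matrix
# inverse in the CGD update reduces to division by (1 + eta^2).
eta = 0.2
x, lam = 0.0, 0.0
for _ in range(400):
    gx = 2 * x + lam         # grad_x L
    glam = 1 - x             # grad_lam (-L)
    denom = 1 + eta**2       # Id - eta^2 * D_xlam L * D_lamx (-L)
    dx = (gx - eta * 1 * glam) / denom
    dlam = (glam - eta * (-1) * gx) / denom
    x, lam = x - eta * dx, lam - eta * dlam

print(x, lam)  # approaches (1, -2): constraint satisfied, multiplier recovered
```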
Pierre-Luc, Clement, Anima, Emma, and I are presently investigating the effectiveness of CGD in the context of equality constrained optimization problems arising in reinforcement learning (RL) and control.
If you are interested in learning more, check out our workshop paper, the implementation using JAX, or our poster at the NeurIPS 2019 workshop on optimization for RL.
In the above, I have tried to convince you that CGD is indeed the natural generalization of gradient descent to two-player games.
In future posts and papers, I hope to comment in more detail on extensions to multiple players and higher order regularity, as well as the implicit regularization of CGD and how it can meaningfully stabilize GAN training even in the absence of Nash equilibria.