
Deep Learning Across Games

2024-09-23

Daniele Condorelli, Massimiliano Furlan


Abstract

We train two neural networks adversarially to play static games. At each iteration, a row and a column network observe a new random bimatrix game and each output a mixed strategy. The parameters of each network are independently updated via stochastic gradient descent on a loss defined by that player's squared regret in the game. Simulations show the joint behaviour of the trained networks approximates a Nash equilibrium in all games. In 2×2 games with multiple equilibria, the networks select the risk-dominant equilibrium. These findings, which are robust and generalise out-of-distribution, illustrate how equilibrium emerges from learning across heterogeneous games.
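The training scheme in the abstract can be sketched in a few lines. The sketch below is a minimal stand-in, not the authors' implementation: it replaces the neural networks with linear-softmax policies, uses a finite-difference gradient in place of backpropagation, and fixes hypothetical choices (2×2 games, uniform payoffs in [0, 1], the learning rate, and the iteration count). Each player sees the full game, outputs a mixed strategy, and is updated only on its own squared regret.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2  # 2x2 games (hypothetical choice for this sketch)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def regret(payoff, own, other):
    """Regret of mix `own` given `payoff` (rows = own actions) and opponent mix `other`."""
    pure = payoff @ other                  # expected payoff of each pure action
    return max(pure.max() - own @ pure, 0.0)

def strategy(theta, x):
    return softmax(theta @ x)

def loss(theta, x, payoff, other):
    return regret(payoff, strategy(theta, x), other) ** 2

def fd_grad(theta, x, payoff, other, eps=1e-4):
    """Finite-difference gradient of the squared regret (stand-in for backprop)."""
    g = np.zeros_like(theta)
    base = loss(theta, x, payoff, other)
    for idx in np.ndindex(*theta.shape):
        theta[idx] += eps
        g[idx] = (loss(theta, x, payoff, other) - base) / eps
        theta[idx] -= eps
    return g

# independent linear-softmax "networks" for the row and column players
theta_r = rng.normal(scale=0.1, size=(N, 2 * N * N))
theta_c = rng.normal(scale=0.1, size=(N, 2 * N * N))

lr = 0.5
for step in range(200):
    A = rng.uniform(size=(N, N))               # row player's payoffs in a fresh random game
    B = rng.uniform(size=(N, N))               # column player's payoffs
    x = np.concatenate([A.ravel(), B.ravel()]) # both players observe the whole game
    p = strategy(theta_r, x)                   # row mixed strategy
    q = strategy(theta_c, x)                   # column mixed strategy
    # simultaneous, independent updates, each on the player's own squared regret
    theta_r -= lr * fd_grad(theta_r, x, A, q)
    theta_c -= lr * fd_grad(theta_c, x, B.T, p)
```

Note that `regret` is zero exactly when the player is best-responding, so a joint squared-regret loss of zero for both players characterises a Nash equilibrium of the sampled game; for instance, uniform mixing by both players in matching pennies yields zero regret.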
