An Analysis of Logit Learning with the r-Lambert Function

2024-09-08

Rory Gavin, Ming Cao, Keith Paarporn

Abstract

The well-known replicator equation in evolutionary game theory describes how population-level behaviors change over time when individuals make decisions using simple imitation learning rules. In this paper, we study evolutionary dynamics based on a fundamentally different class of learning rules known as logit learning. Numerous previous studies on logit dynamics provide numerical evidence of bifurcations of multiple fixed points for several types of games. Our results here provide a more explicit analysis of the logit fixed points and their stability properties for the entire class of two-strategy population games -- by way of the r-Lambert function. We find that for Prisoner's Dilemma and anti-coordination games, there is only a single fixed point for all rationality levels. However, coordination games exhibit a pitchfork bifurcation: there is a single fixed point in a low-rationality regime, and three fixed points in a high-rationality regime. We provide an implicit characterization for the level of rationality where this bifurcation occurs. In all cases, the set of logit fixed points converges to the full set of Nash equilibria in the high rationality limit.
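The abstract's central finding can be checked numerically: for a two-strategy population game, a logit fixed point is a population state x satisfying x = σ(β·(F₁(x) − F₂(x))), where β is the rationality level and Fᵢ the expected payoffs. The sketch below (an assumption for illustration; the paper's own analysis is analytical, via the r-Lambert function, not a grid scan) counts fixed points for a hypothetical coordination game and exhibits the pitchfork bifurcation: one fixed point at low β, three at high β.

```python
import math

def logit_fixed_points(beta, payoff=((2.0, 0.0), (0.0, 1.0)), n=10_000):
    """Locate fixed points of the logit choice map for a two-strategy
    population game under random matching. `payoff` is the 2x2 matrix
    (here a hypothetical coordination game); x is the fraction of the
    population playing strategy 1."""
    (a, b), (c, d) = payoff

    def g(x):
        # Payoff difference F1(x) - F2(x), then logit response minus x;
        # zeros of g are the logit fixed points.
        diff = (a * x + b * (1 - x)) - (c * x + d * (1 - x))
        return 1.0 / (1.0 + math.exp(-beta * diff)) - x

    roots = []
    prev = g(0.0)
    for i in range(1, n + 1):
        cur = g(i / n)
        if prev * cur < 0:  # sign change brackets a fixed point
            lo, hi = (i - 1) / n, i / n
            for _ in range(60):  # refine by bisection
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        prev = cur
    return roots

low = logit_fixed_points(beta=0.5)   # low rationality
high = logit_fixed_points(beta=5.0)  # high rationality
print(len(low), len(high))  # → 1 3
```

As β grows, the two outer fixed points of the high-rationality regime approach the game's pure Nash equilibria and the interior one approaches the mixed equilibrium, consistent with the abstract's claim that the logit fixed points converge to the full Nash set in the high-rationality limit.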
