Multi-Task Learning as a Bargaining Game

2022-02-02 · Code Available

Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya


Abstract

In multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks. Joint training reduces computation costs and improves data efficiency; however, since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts. A common method for alleviating this issue is to combine per-task gradients into a joint update direction using a particular heuristic. In this paper, we propose viewing the gradient combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update. Under certain assumptions, the bargaining problem has a unique solution, known as the Nash Bargaining Solution, which we propose to use as a principled approach to multi-task learning. We describe a new MTL optimization procedure, Nash-MTL, and derive theoretical guarantees for its convergence. Empirically, we show that Nash-MTL achieves state-of-the-art results on multiple MTL benchmarks in various domains.
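
The core step described in the abstract is replacing heuristic gradient weighting with the Nash bargaining solution: with per-task gradients g_1, ..., g_K stacked as the columns of a matrix G, the joint update direction is the weighted sum of task gradients, where the positive weights alpha satisfy (G^T G) alpha = 1/alpha element-wise. The sketch below illustrates that optimality condition with a generic NumPy/SciPy least-squares solve; the function name nash_mtl_direction and the solver choice are illustrative assumptions, not the authors' reference implementation (which solves a sequence of convex approximations).

import numpy as np
from scipy.optimize import least_squares

def nash_mtl_direction(grads):
    # grads: list of K flattened per-task gradients, each a 1-D numpy array.
    G = np.stack(grads, axis=1)      # (num_params, K): task gradients as columns
    GtG = G.T @ G                    # (K, K) Gram matrix of the task gradients

    # Optimality condition of the Nash bargaining solution: (G^T G) alpha = 1/alpha.
    def residual(alpha):
        return GtG @ alpha - 1.0 / alpha

    alpha0 = np.ones(len(grads))     # start from uniform task weights
    alpha = least_squares(residual, alpha0, bounds=(1e-8, np.inf)).x
    return G @ alpha                 # joint update direction: sum_i alpha_i * g_i

# Toy usage with two partially conflicting task gradients.
g1 = np.array([1.0, 0.5, 0.0])
g2 = np.array([-0.5, 1.0, 0.2])
print(nash_mtl_direction([g1, g2]))

In an actual training loop, this combined direction would replace the naive sum of task gradients before the optimizer step.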

Tasks

Multi-Task Learning

Benchmark Results

Dataset          Model     Metric    Claimed  Verified  Status
Cityscapes test  Nash-MTL  mIoU      75.41              Unverified
NYUv2            Nash-MTL  Mean IoU  40.13              Unverified
QM9              Nash-MTL  ∆m%       62                 Unverified

Reproductions

No reproductions yet; be the first to reproduce this paper.