
Fairness in Reinforcement Learning

2019-07-24

Paul Weng



Abstract

Decision support systems (e.g., for ecological conservation) and autonomous systems (e.g., adaptive controllers in smart cities) are starting to be deployed in real applications. Although their operations often impact many users or stakeholders, fairness is generally not taken into account in their design, which can lead to completely unfair outcomes for some users or stakeholders. To tackle this issue, we advocate the use of social welfare functions that encode fairness, and we present this novel general problem in the context of (deep) reinforcement learning, although it could be extended to other machine learning tasks.
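To make the idea of a fairness-encoding social welfare function concrete, below is a small illustrative sketch of one common choice from the fair-optimization literature, a generalized Gini social welfare function. It aggregates per-user utilities (e.g., per-user expected returns in an RL setting) with decreasing weights on sorted utilities, so worse-off users count more. The function name and the specific weight schedule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def generalized_gini_swf(utilities, weights=None):
    """Generalized Gini social welfare function (illustrative sketch).

    Sorts per-user utilities in increasing order and takes a weighted
    sum with strictly decreasing weights, so the worst-off users weigh
    most. This makes the function impartial (permutation-invariant)
    and transfer-favoring: moving utility from a better-off user to a
    worse-off one (without reversing their order) raises welfare.
    """
    u = np.sort(np.asarray(utilities, dtype=float))  # worst-off first
    if weights is None:
        # Example strictly decreasing weights (an assumption, not from the paper).
        weights = 0.5 ** np.arange(u.size)
    return float(np.dot(weights, u))
```

For instance, with two users the even allocation `[2, 2]` scores higher than the unequal allocation `[1, 3]`, even though both have the same total utility; a policy maximizing this welfare is thus steered toward equitable outcomes.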
