
Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment

2024-10-15

Peter Vamplew, Conor F Hayes, Cameron Foale, Richard Dazeley, Hadassah Harland


Abstract

Reinforcement learning (RL) is a valuable tool for the creation of AI systems. However, it may be problematic to adequately align RL systems based on scalar rewards when there are multiple conflicting values or stakeholders to be considered. Over the last decade, multi-objective reinforcement learning (MORL) using vector rewards has emerged as an alternative to standard, scalar RL. This paper provides an overview of the role which MORL can play in creating pluralistically-aligned AI.
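To make the scalar-versus-vector distinction concrete, here is a minimal sketch (not from the paper; the reward values, weights, and function names are hypothetical) of how a vector reward, one component per objective or stakeholder, can be collapsed to a scalar via linear scalarization — and how the preferred action changes with the stakeholder's weights:

```python
import numpy as np

# Hypothetical example: each action yields a vector reward,
# one component per objective (e.g., task performance, safety).
# Rows: actions; columns: objectives.
vector_rewards = np.array([
    [1.0, 0.2],   # action 0: high performance, low safety
    [0.6, 0.9],   # action 1: moderate performance, high safety
])

def linearly_scalarize(rewards, weights):
    """Collapse vector rewards to scalars via a weighted sum.

    Different weight vectors encode different stakeholder
    priorities, so the 'best' action changes with the weights.
    """
    return rewards @ np.asarray(weights)

# A performance-focused stakeholder prefers action 0 ...
perf_weights = [0.9, 0.1]
best_for_perf = int(np.argmax(linearly_scalarize(vector_rewards, perf_weights)))

# ... while a safety-focused stakeholder prefers action 1.
safety_weights = [0.2, 0.8]
best_for_safety = int(np.argmax(linearly_scalarize(vector_rewards, safety_weights)))

print(best_for_perf, best_for_safety)  # → 0 1
```

Standard scalar RL fixes one such weighting (implicitly, inside the reward function) before training; MORL keeps the reward as a vector, which is what allows a single learned model to serve multiple stakeholder preferences.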
