
Efficient Offline Communication Policies for Factored Multiagent POMDPs

2011-12-01 · NeurIPS 2011

João V. Messias, Matthijs Spaan, Pedro U. Lima


Abstract

Factored Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) form a powerful framework for multiagent planning under uncertainty, but optimal solutions require a rigid history-based policy representation. In this paper we allow inter-agent communication, which turns the problem into a centralized Multiagent POMDP (MPOMDP). We map belief distributions over state factors to an agent's local actions by exploiting structure in the joint MPOMDP policy. The key point is that when sparse dependencies between the agents' decisions exist, an agent's belief over its local state factors is often sufficient to unequivocally identify the optimal action, so communication can be avoided. We formalize these notions by casting the problem into convex optimization form, and present experimental results illustrating the communication savings that can be obtained.
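The core test the abstract describes can be illustrated with a small sketch. Assume (hypothetically, for illustration only; the paper itself uses a convex-optimization formulation) that the joint MPOMDP policy is given by a set of linear value vectors, one per joint action, over the joint state space, and that the agent's belief is independent of the belief over the remaining factors. Then the joint beliefs consistent with a fixed local belief form a simplex, each action's value is linear over that simplex, and one action dominates everywhere iff it is the argmax at every vertex. The names `alpha`, `b_loc`, `n_local`, and `n_other` below are all assumed for this sketch:

```python
import numpy as np

def action_is_locally_determined(alpha, b_loc, n_local, n_other):
    """Check whether the local belief alone pins down the optimal action.

    alpha:   (n_actions, n_local * n_other) array of value vectors over the
             joint state space (joint state index = local * n_other + other).
    b_loc:   belief over the agent's n_local local states.
    Assuming independence between local and non-local factors, the joint
    beliefs consistent with b_loc are {kron(b_loc, q) : q in the simplex
    over the n_other non-local states}. Each action's value is linear in q,
    so it suffices to check the argmax at every simplex vertex q = e_j.
    """
    best_actions = set()
    for j in range(n_other):
        q = np.zeros(n_other)
        q[j] = 1.0
        b_joint = np.kron(b_loc, q)  # joint belief at this vertex
        best_actions.add(int(np.argmax(alpha @ b_joint)))
    # one action dominating at all vertices dominates on the whole simplex
    return len(best_actions) == 1

# Toy example with 2 local states, 2 non-local states, 2 actions:
alpha = np.array([[1.0, 1.0, 1.0, 1.0],   # action 0 is best everywhere
                  [0.0, 0.0, 0.0, 0.0]])
print(action_is_locally_determined(alpha, np.array([0.7, 0.3]), 2, 2))  # → True
```

When the check returns False, the local belief does not determine the action, and the agent would need to communicate to resolve the ambiguity.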
