Title

Lifting Multi-Agent A* - Implementing a New Optimisation Problem in Isomorphic DEC-POMDPs: How Many Agents Do We Need?

Abstract

Artificial intelligence in the context of autonomously acting entities, also called agents, requires methods for efficiently planning the agents' actions. Decentralized Partially Observable Markov Decision Processes (DEC-POMDPs) are the basic theoretical framework for modelling such problems for cooperative teams of autonomously acting agents. The solution of a DEC-POMDP consists of concrete action plans for the individual agents, called policies. In certain scenarios, the set of agents can be partitioned into groups that behave according to the same rules, allowing policies to be computed for representatives of whole groups instead of for each agent individually. Various solution approaches and algorithms have emerged to compute policies in the general case; one such algorithm is Multi-Agent A* (MAA*). The partitioned setting also gives rise to a new optimisation problem that asks for the number of agents needed to reach a given utility. The goal of this thesis is to adapt MAA* to this setting of partitioned agent sets, computing solutions for representatives, and to implement this new optimisation problem on top of the MAA* solution.
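To make the new optimisation problem concrete, the following minimal sketch searches for the smallest agent count whose achievable utility reaches a given target. The function `evaluate` is a hypothetical stand-in for a full MAA* solve of the (lifted) DEC-POMDP with a given number of agents per group; the toy value function below is purely illustrative and not part of the thesis description.

```python
from typing import Callable, Optional

def min_agents_for_utility(
    evaluate: Callable[[int], float],
    target: float,
    max_agents: int,
) -> Optional[int]:
    """Return the smallest agent count n (1..max_agents) for which
    evaluate(n) >= target, or None if no such count exists.

    `evaluate(n)` is a black-box placeholder for computing the optimal
    joint-policy value of the DEC-POMDP with n agents, e.g. via MAA*.
    """
    for n in range(1, max_agents + 1):
        if evaluate(n) >= target:
            return n
    return None

# Toy stand-in: utility grows with diminishing returns in the agent count.
def toy_value(n: int) -> float:
    return 10.0 * (1.0 - 0.5 ** n)

print(min_agents_for_utility(toy_value, target=9.0, max_agents=10))  # → 4
```

If the achievable utility is known to be monotone non-decreasing in the number of agents (an assumption, not stated in the abstract), the linear scan could be replaced by a binary search over the agent count, saving repeated expensive MAA* solves.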

Requirements

Decision theory; lifted inference (optional)

Person working on it

Constantin Castan

Category

Master's thesis