Title

A Closer Look at Decentralized Partially Observable Markov Decision Processes: Implementing and Evaluating a Memory-Bounded Solution Approach

Abstract

Artificial intelligence for autonomously acting entities, also called agents, requires methods for efficiently planning the agents' actions. Decentralized Partially Observable Markov Decision Processes (DEC-POMDPs) provide the basic theoretical framework for modeling such problems for cooperative teams of autonomous agents. A solution to a DEC-POMDP consists of concrete action plans for the individual agents, called policies. Various solution approaches and algorithms have been developed to compute these; one family comprises approximate "bottom-up" approaches, to which the algorithm considered here belongs. The topic of the thesis is the theoretical elaboration, implementation, and evaluation of the Memory-Bounded Dynamic Programming (MBDP) algorithm of Seuken and Zilberstein (2007).
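To give a flavor of the bottom-up, memory-bounded idea behind MBDP, the following is a highly simplified sketch, not the algorithm of Seuken and Zilberstein (2007) itself: it performs exhaustive one-step backups of per-agent policy trees and then retains only a fixed number of trees, selected by evaluating joint policies at sampled beliefs (here plain random beliefs stand in for MBDP's top-down heuristics). The toy two-agent problem, all its dynamics, and all names are invented for illustration.

```python
import itertools
import random

# Toy 2-agent problem (hypothetical): 2 states, 2 actions and
# 2 observations per agent. Agents are jointly rewarded for both
# choosing the action that matches the hidden state.
S = [0, 1]
A = [0, 1]
OBS = [0, 1]

def T(s, a1, a2):
    """Transition distribution P(s' | s, a1, a2) as a dict."""
    stay = 0.9 if a1 == a2 else 0.6
    return {s: stay, 1 - s: 1.0 - stay}

def Z(s_next, o):
    """Per-agent observation probability P(o | s') (independent agents)."""
    return 0.8 if o == s_next else 0.2

def R(s, a1, a2):
    """Joint reward."""
    return 1.0 if a1 == a2 == s else 0.0

def value(s, t1, t2):
    """Expected value of joint policy trees (action, children) from state s."""
    a1, c1 = t1
    a2, c2 = t2
    v = R(s, a1, a2)
    if not c1:  # leaves: horizon-1 trees
        return v
    for s_next, p in T(s, a1, a2).items():
        for o1 in OBS:
            for o2 in OBS:
                p_o = Z(s_next, o1) * Z(s_next, o2)
                v += p * p_o * value(s_next, c1[o1], c2[o2])
    return v

def belief_value(b, t1, t2):
    return sum(b[s] * value(s, t1, t2) for s in S)

def full_backup(trees):
    """All one-step-longer policy trees built from the retained subtrees."""
    return [(a, dict(zip(OBS, subs)))
            for a in A
            for subs in itertools.product(trees, repeat=len(OBS))]

def mbdp_sketch(horizon, max_trees=3, seed=0):
    """Bottom-up DP with a bounded number of retained trees per agent."""
    rng = random.Random(seed)
    trees1 = [(a, {}) for a in A]  # horizon-1 policies
    trees2 = [(a, {}) for a in A]
    for _ in range(horizon - 1):
        cand1, cand2 = full_backup(trees1), full_backup(trees2)
        keep1, keep2 = [], []
        while len(keep1) < max_trees:
            # Random belief sampling stands in for MBDP's heuristics.
            p = rng.random()
            b = {0: p, 1: 1.0 - p}
            t1, t2 = max(itertools.product(cand1, cand2),
                         key=lambda pair: belief_value(b, *pair))
            keep1.append(t1)
            keep2.append(t2)
        trees1, trees2 = keep1, keep2
    b0 = {0: 0.5, 1: 0.5}  # assumed uniform initial belief
    return max(belief_value(b0, t1, t2)
               for t1, t2 in itertools.product(trees1, trees2))
```

The key property this sketch shares with MBDP is that the number of retained policy trees per agent stays constant across horizon steps, avoiding the doubly exponential blow-up of exact dynamic programming.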

Requirements

Decision theory

Category

Master's thesis