Statistical Relational AI - Exploiting Symmetries
A tutorial by Tanya Braun, Marcel Gehrke, and Marco Wilhelm at KR 2023
In recent years, a need for compact representations of databases has become apparent, for example in natural language understanding, machine learning, and decision making, accompanied by a need for efficient inference algorithms. This need contrasts with the requirement to represent large numbers of individuals, objects, and relations among them, as well as uncertainties about attribute values, object identities, or even the existence of individuals, objects, and relations. From this tension, many advances have emerged in the field of probabilistic relational modeling for artificial intelligence, also known as statistical relational AI (StaRAI), as addressed in David Poole's keynote talk at KR 2020.
A central concept that many approaches in StaRAI have in common is the exploitation of symmetries, for example those caused by the indistinguishability of certain individuals or objects. In this tutorial, we give a general introduction to StaRAI and show how it advances long-standing problems in the field of AI by exploiting symmetries. In the first focus topic, we address symmetries in probabilistic graphical models. We look at identifying the most likely sources of events while avoiding a combinatorial explosion as domain sizes increase, using representatives for indistinguishable objects. Further, symmetries in temporal inference allow for efficient inference even for huge domain sizes by approximating symmetries over time, whereas propositional approaches already struggle with small domain sizes. In the second focus topic, we investigate symmetries in conditional knowledge bases and in probability distributions with maximum entropy, with the overall goal of lifted inference at maximum entropy.
Target Audience, Prerequisite Knowledge, and Learning Goals
The tutorial provides an introduction to StaRAI by focusing on how symmetries are used in StaRAI formalisms. It starts with an overview of the basics and then deep-dives into our two focus topics, two different formalisms exploiting symmetries: probabilistic graphical models and conditional knowledge bases. Hence, the tutorial is of interest to a broad range of KR participants, as no special previous knowledge is necessary. The focus lies on the problems and the ideas to solve them. Necessary knowledge is covered in the basics and expanded upon in the focus topics. However, the tutorial is not only for StaRAI novices, as our two focus topics highlight recent advances in the field, with the introductory part establishing a common ground.
Overall, the audience learns about the motivation behind StaRAI and how symmetries can be exploited in different ways, and, hopefully, finds inspiration for identifying and using symmetries in their own work. In the first focus topic, the audience learns about optimization problems that become solvable in probabilistic graphical models of different flavors that compactly encode those symmetries using relational constructs. In the second focus topic, the audience learns about the merits of exploiting symmetries and the principle of maximum entropy when reasoning with relational probabilistic conditionals.
Outline of the Tutorial
The tutorial is divided into three parts:
The first part provides a gentle introduction to the field of StaRAI by going back to its roots, starting with probabilistic Datalog and advancing to ProbLog as well as Markov logic networks, until we reach parametric factor graphs, parameterised Gaussian Bayesian networks, and probabilistic soft logic. We consider the symmetries that lie behind these formalisms and make the compact representation of knowledge possible. We take a look at inference tasks in the form of query answering in these models and provide an intuition about lifted inference, which exploits the symmetries in the models to answer queries efficiently.
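To give a first flavour of lifted inference, the following minimal Python sketch contrasts propositional and lifted computation of a partition function for a single parametric factor applied identically to n indistinguishable individuals. The factor name and potential values are made up for illustration and are not from the tutorial material.

```python
from itertools import product
from math import comb

# Hypothetical potential values of one parametric factor phi(Sick(X)),
# applied identically to every individual X (numbers chosen for illustration).
PHI = {True: 2, False: 1}

def weight(assignment):
    """Product of factor values over one complete assignment."""
    w = 1
    for value in assignment:
        w *= PHI[value]
    return w

def ground_partition_function(n):
    """Propositional: enumerate all 2^n assignments explicitly."""
    return sum(weight(a) for a in product([True, False], repeat=n))

def lifted_partition_function(n):
    """Lifted: all assignments with k true values have the same weight,
    and there are C(n, k) of them, so summing over counts suffices."""
    return sum(comb(n, k) * PHI[True] ** k * PHI[False] ** (n - k)
               for k in range(n + 1))
```

Both functions agree, but the lifted version sums n + 1 terms instead of enumerating 2^n assignments, which is the essence of exploiting indistinguishability.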
Focus Topic 1: Exploiting Symmetries in Probabilistic Graphical Models
Building on the introduction, we take a deeper look at how symmetries can help us tackle long-standing AI problems. We begin with the problem that we only observe that an event happened but cannot observe which individual caused it. Here, one has to test which individual causing the event makes our world most probable. On a propositional level, all individuals are distinguishable; thus, already with small domain sizes, one has to test many possible combinations due to a combinatorial blow-up. With symmetries, we obtain groups of indistinguishable individuals, which allows us to solve the problem by testing only one representative per group. Afterwards, we introduce time into the mix and show that while increasing domain sizes leads to exponential complexity growth in the propositional case, with symmetries the growth is only logarithmic. To keep temporal reasoning polynomial with respect to domain sizes, we take a closer look at how to approximate symmetries, with error bounds, while proceeding in time. This shows that symmetries help us tackle long-standing AI problems.
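The representative idea can be sketched in a few lines of Python. The groups, scores, and individuals below are invented for illustration; the point is only that when the score of "individual x caused the event" depends solely on x's group, one evaluation per group replaces one evaluation per individual.

```python
# Hypothetical scenario: an event was observed, and for each individual x we
# score how probable the world is assuming x caused the event. By symmetry,
# the score depends only on x's group (groups and numbers are made up).
GROUP_SCORE = {"students": 0.3, "staff": 0.7}

def most_likely_source_propositional(individuals):
    """Propositional: evaluate every individual separately (n evaluations)."""
    return max(individuals, key=lambda x: GROUP_SCORE[x[1]])

def most_likely_source_lifted(individuals):
    """Lifted: evaluate one representative per group of indistinguishable
    individuals (|groups| evaluations); any member of the best group works."""
    best_group = max({g for _, g in individuals}, key=GROUP_SCORE.get)
    return next(x for x in individuals if x[1] == best_group)
```

With groups of indistinguishable individuals, the number of evaluations no longer grows with the domain size but only with the number of groups.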
Focus Topic 2: Exploiting Symmetries in Conditional Knowledge Bases
The second focus topic examines probabilistic reasoning with relational conditional knowledge bases. We discuss the aggregating semantics, a sophisticated semantics of probabilistic conditionals in which uncertain statements about individuals as well as about classes of individuals can be expressed in equal measure. The aggregating semantics combines statistical and subjective aspects of probabilities by counting the instances of a conditional and weighting these counts with subjective probabilities. We make the connection to weighted first-order model counting (WFOMC) and introduce the concept of conditional knowledge compilation. Finally, we show that the principle of maximum entropy fits this concept perfectly by treating indistinguishable individuals in the same way, so that the exploitation of symmetries (in many cases) leads to a very compact representation of the maximum entropy distribution, which is a prerequisite for lifted inference at maximum entropy.
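As a small numerical taste of reasoning at maximum entropy, the sketch below computes the maximum entropy distribution satisfying a single toy propositional conditional (b|a)[0.8] by bisection on the Lagrange multiplier of its constraint. This is an assumed minimal example, not the tutorial's relational machinery; the world encoding and feature values are standard for a single conditional but chosen here for illustration.

```python
import math

# Worlds over two atoms a, b; "~" marks negation (toy example).
WORLDS = ["ab", "a~b", "~ab", "~a~b"]

def maxent_distribution(x, lo=-50.0, hi=50.0):
    """Maximum entropy distribution satisfying P(b|a) = x, found by bisection
    on the Lagrange multiplier lam of the constraint. Worlds verifying (b|a)
    get feature 1 - x, falsifying worlds get -x, and worlds where 'a' is
    false get 0 (the conditional does not apply there)."""
    feat = {"ab": 1 - x, "a~b": -x, "~ab": 0.0, "~a~b": 0.0}

    def cond_prob(lam):
        w = {s: math.exp(lam * feat[s]) for s in WORLDS}
        return w["ab"] / (w["ab"] + w["a~b"])

    # cond_prob is monotonically increasing in lam, so bisection converges.
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2
        if cond_prob(mid) < x:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = {s: math.exp(lam * feat[s]) for s in WORLDS}
    z = sum(w.values())
    return {s: w[s] / z for s in WORLDS}
```

Note that the two worlds in which the conditional is inapplicable receive equal probability: the maximum entropy distribution treats indistinguishable worlds, and likewise indistinguishable individuals, in the same way, which is exactly the symmetry that compact representations exploit.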
Agenda (including presentation material)
- Introduction [60 min, Tanya; slides]
- Relational Models under Uncertainty
- Lifted Inference in Probabilistic Relational Models
- Exploiting Symmetries in Probabilistic Graphical Models [60 min, Marcel; slides]
- Identifying Most Likely Sources of Events
- Introducing Time
- Approximating Symmetries over Time to Keep Reasoning Polynomial
- Exploiting Symmetries in Conditional Knowledge Bases [80 min, Marco; slides]
- Relational Probabilistic Conditionals
- Principle of Maximum Entropy
- Conditional Knowledge Compilation
- Summary [10 min, Tanya; slides]
- Tanya Braun and Ralf Möller: Parameterised Queries and Lifted Query Answering. In: IJCAI-18 Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018.
- Luc De Raedt, Angelika Kimmig, and Hannu Toivonen: ProbLog: A Probabilistic Prolog and its Application in Link Discovery. In: IJCAI-07 Proceedings of the 20th International Joint Conference on Artificial Intelligence, 2007.
- Norbert Fuhr: Probabilistic Datalog - A Logic for Powerful Retrieval Methods. In: SIGIR-95 Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1995.
- Marcel Gehrke, Ralf Möller, and Tanya Braun: Taming Reasoning in Temporal Probabilistic Relational Models. In: ECAI-20 Proceedings of the 24th European Conference on Artificial Intelligence, 2020.
- Manfred Jaeger: Relational Bayesian Networks. In: UAI-97 Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence, 1997.
- David Poole: First-order Probabilistic Inference. In: IJCAI-03 Proceedings of the 18th International Joint Conference on Artificial Intelligence, 2003.
- Matthew Richardson and Pedro Domingos: Markov Logic Networks. In: Machine Learning, 62(1-2), 2006.
- Taisuke Sato: A Statistical Learning Method for Logic Programs with Distribution Semantics. In: Proceedings of the 12th International Conference on Logic Programming, 1995.
- Nima Taghipour, Daan Fierens, Jesse Davis, and Hendrik Blockeel: Lifted Variable Elimination: Decoupling the Operators from the Constraint Language. In: Journal of Artificial Intelligence Research, 47(1), 2013.
- Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt: Lifted Probabilistic Inference by First-order Knowledge Compilation. In: IJCAI-11 Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 2011.
- Marco Wilhelm, Marc Finthammer, Gabriele Kern-Isberner, and Christoph Beierle: First-Order Typed Model Counting for Probabilistic Conditional Reasoning at Maximum Entropy. In: SUM-17 Proceedings of the 11th International Conference on Scalable Uncertainty Management, 2017.