Graphical Models, Bayesian Learning and Statistical Relational Learning, 6 credits

This is a brief overview of the content of the course Graphical Models, Bayesian Learning and Statistical Relational Learning, given in autumn 2019. Further information will be provided later.


Module #1 (23-24 Sep, Svante Linusson, KTH)
A fundamental problem in artificial intelligence is to model complex cause-and-effect relationships between large collections of random variables. Typically, cause-effect systems are modeled with a directed acyclic graph (DAG) that encodes a collection of conditional independence relations which are observed to hold in the data-generating distribution.

When an AI system is equipped with such a model, often called a Bayesian network, or an algorithm by which to generate such a model, it can then perform inference to better make decisions within the observed causal framework. In this way, the marriage of a combinatorial object, a DAG, and a family of statistical distributions allows us to model causal relationships in AI research, biology, and the social sciences alike.
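To make this concrete, the following minimal sketch (plain Python; the three binary variables and all probability values are invented purely for illustration) shows how even the simplest DAG, the chain X -> Y -> Z, encodes a conditional independence through the factorization p(x, y, z) = p(x) p(y | x) p(z | y):

# p(x, y, z) = p(x) p(y | x) p(z | y) for the chain X -> Y -> Z
p_x = {0: 0.6, 1: 0.4}
p_y_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_z_given_y = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(x, y, z):
    return p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

# The graph promises X _||_ Z | Y: p(z | x, y) must not depend on x.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            p_z_given_xy = joint(x, y, z) / sum(joint(x, y, zz) for zz in (0, 1))
            assert abs(p_z_given_xy - p_z_given_y[y][z]) < 1e-12
print("X is independent of Z given Y, exactly as the DAG encodes")

The same reading of a graph (each variable is independent of its non-descendants given its parents) is what the structure learning algorithms discussed in this module aim to recover from data.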

In this course, we will parse out the finer details of how graphs and their combinatorial properties can be used to encode probabilistic as well as causal relationships between random variables. Topics to be discussed include probabilistic graphical models, causal models, interventional distributions and structure learning algorithms.


Module #2 (28-29 Oct, Fredrik Lindsten, LiU)
Bayesian models are at the core of many machine learning applications. In such models, random variables are used to represent both the observed data and the unknown (or latent) variables of the model. Due to their probabilistic nature, these models can systematically represent and cope with the uncertainty that is inherent to most data. Furthermore, they offer a high degree of flexibility in constructing complex models from simple parts. For instance, this can be accomplished by modeling both observed and latent variables using a probabilistic graphical model, which encodes the structure of the model through the conditional independencies among these variables.

A central task in Bayesian modeling is to infer the latent variables from data; that is, to compute the posterior probability distribution over the latent variables conditional on the data. In most cases this leads to an intractable integration problem, which calls for approximate inference algorithms. In this course module we will introduce some of the most widely used algorithms of this type, including Markov chain Monte Carlo, approximate message passing, and variational inference, with a particular emphasis on inference in probabilistic graphical models.
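As a taste of the first of these methods, here is a minimal random-walk Metropolis-Hastings sketch in plain Python for a toy model with a single latent variable: a standard normal prior on theta and unit-variance Gaussian observations. The data values, the proposal standard deviation and the chain length are invented for illustration only.

import math, random

data = [0.9, 1.3, 0.4, 1.1, 0.7]          # hypothetical observations y_i ~ N(theta, 1)

def log_post(theta):
    # log N(theta; 0, 1) prior + Gaussian log likelihood, up to an additive constant
    return -0.5 * theta ** 2 + sum(-0.5 * (y - theta) ** 2 for y in data)

theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)          # symmetric random-walk proposal
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                               # accept; otherwise keep theta
    samples.append(theta)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(posterior_mean)   # close to sum(data) / (len(data) + 1), i.e. roughly 0.73

For a conjugate model like this the posterior is of course available in closed form; the point of the sketch is only to show the accept/reject mechanics that carry over to models where no closed form exists.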


Module #3 (25-26 Nov, Vera Koponen, UU)
Formal logic is a powerful tool for knowledge representation and inference when information is certain. But information is often uncertain. This module therefore combines logic with probability in order to represent knowledge and perform inference under uncertainty. Ideally, a participant should already have basic knowledge of propositional and predicate logic (also called first-order logic), but in any case detailed reading instructions for propositional and predicate logic will be available at the latest by the beginning of the autumn semester 2019.

At the meeting for this module, the syntax and semantics of propositional and (especially) predicate logic will be generalized so that one can speak about probabilities of truth and falsity and about inference when information is uncertain. Many (possibly all) practical formalisms within statistical relational learning – a major paradigm within modern AI – can be seen as special cases of the general concepts studied in this module. Major results about algorithmic decidability and efficiency for such logical formalisms will also be treated.
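As a very rough illustration of the kind of object studied here, the following sketch (plain Python; the atoms, the deterministic rule and the numbers are invented, and real statistical relational formalisms work with far richer relational vocabularies) assigns a probability to a formula by summing the probabilities of the possible worlds, i.e. truth assignments, in which it holds:

from itertools import product

atoms = ["rain", "sprinkler", "wet"]
prior = {"rain": 0.3, "sprinkler": 0.4}   # independent priors for the basic atoms

def world_prob(w):
    # rain and sprinkler are independent; "wet" is forced by the rule
    # wet <-> (rain or sprinkler), so inconsistent worlds get probability zero
    p = (prior["rain"] if w["rain"] else 1 - prior["rain"]) * \
        (prior["sprinkler"] if w["sprinkler"] else 1 - prior["sprinkler"])
    return p if w["wet"] == (w["rain"] or w["sprinkler"]) else 0.0

def prob(formula):
    # probability of a formula = total probability of the worlds where it is true
    total = 0.0
    for vals in product([True, False], repeat=len(atoms)):
        w = dict(zip(atoms, vals))
        if formula(w):
            total += world_prob(w)
    return total

p_wet = prob(lambda w: w["wet"])                                    # 0.58
p_rain_given_wet = prob(lambda w: w["rain"] and w["wet"]) / p_wet   # about 0.52
print(p_wet, p_rain_given_wet)

Here the worlds can be enumerated by brute force; the decidability and efficiency results treated in the module concern what happens when the logic is expressive enough that such enumeration is no longer feasible.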