Wallenberg AI, Autonomous Systems and Software Program (WASP) is Sweden’s largest individual research program ever. WASP provides a platform for academic research and education, fostering interaction with Sweden’s leading technology companies. Part of the initiative in AI within WASP deals with increasing our understanding of fundamental mathematical principles behind AI.
We now offer up to 18 PhD positions at seven university sites, with a focus on the mathematics behind AI.
For further descriptions of the positions and to apply, follow the links to the coordinating universities. The positions included in this call are at Chalmers, KTH, Linköping University, Lund University, Stockholm University, Umeå University and Uppsala University. Please note that the final application dates differ between universities.
Optimization algorithms for machine learning:
Optimization methods have played a major role in modern machine learning: in particular, many of the recent successes rest on the development of highly scalable stochastic optimization methods such as stochastic gradient descent (SGD) and its accelerated variants. The goal of this project is to explore further ideas around these concepts and apply them to fundamental ML problems such as matrix completion and optimal transport.
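As background, the basic SGD scheme mentioned above can be sketched in a few lines. This is a minimal illustration on a least-squares problem; the data, step size and iteration counts are illustrative assumptions, not details from the project.

```python
import numpy as np

# Minimal sketch of stochastic gradient descent (SGD) on least squares.
# All sizes and hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(n):          # one random sample per step
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i·w - y_i)^2
        w -= lr * grad

print(np.linalg.norm(w - w_true))  # small residual
```

Accelerated variants add a momentum term to the update; the single-sample gradient step is the part that makes the method scale to very large datasets.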
Four positions in mathematics for AI within the following areas/projects:
Learning with noisy labels; Deep learning and statistical model choice; Deepest learning using stochastic partial differential equations; Quantum deep learning and renormalization: a group-theoretic approach to hierarchical feature representations.
Understanding deep learning: From theory to algorithms.
The purpose of this project is to increase our theoretical understanding of deep neural networks. This will be done using tools from information theory, focusing on specific tasks relevant to computer vision.
KTH Royal Institute of Technology
Algebraic Topology and Mathematical Statistics:
AI is often identified with the ability to simplify data while retaining the information content relevant for decision making. Homological invariants are examples of simplifying tools particularly suitable for encoding shape. The challenge is to adapt homological information for statistical analysis.
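As a small taste of such homological invariants, the simplest case (0-dimensional persistent homology, i.e. how connected components of a point cloud merge as the scale grows) can be computed with a union-find structure. The point cloud and construction below are illustrative assumptions, not part of the project.

```python
import itertools
import math

# Sketch: 0-dimensional persistence of a point cloud. Components are born
# at scale 0; each merge of two components "kills" one bar at the merge
# distance. Implemented with union-find over edges sorted by length.
def h0_barcode(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2)
    )
    deaths = []  # scales at which components merge
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
    return deaths  # one infinite bar (the final component) is omitted

pts = [(0, 0), (0, 1), (5, 0), (5, 1)]
print(h0_barcode(pts))  # two merges at scale 1, one at scale 5
```

The two short bars (the close pairs) versus the one long bar (the two distant clusters) is exactly the kind of shape summary that the statistical analysis would then work with.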
Information theory for AI and machine learning:
The project studies the interaction between information theory and mathematical methods for learning and statistical inference, with applications to artificial intelligence and machine learning, especially deep learning. New theoretical principles and guidelines for algorithm development and mathematical performance analysis will be developed, with information theory as the scientific basis.
The project concerns studies of combinatorial structures, such as directed graphs and convex polytopes, as well as combinatorial notions of causality. The candidate should have knowledge of and an interest in combinatorics and will be part of the combinatorics group.
Going Beyond State of the Art in Integer Linear Programming by Using Conflict Driven Search:
In this project we want to develop mathematical methods for efficient 0-1 integer linear programming (also known as pseudo-Boolean SAT solving), based on the conflict-driven clause learning (CDCL) techniques used in SAT solving, and then implement these methods of reasoning in new solvers. Our ultimate goal is to build a proof-of-concept engine for pseudo-Boolean optimization and integer linear programming, based on new, original ideas, with the potential to be competitive in the relevant domains with packages such as SCIP, or even CPLEX and Gurobi.
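To make the problem class concrete, here is a toy 0-1 integer linear program solved by exhaustive search over the Boolean cube. The objective and constraint are illustrative assumptions; the whole point of the project is to replace this brute-force enumeration with conflict-driven search.

```python
from itertools import product

# Toy pseudo-Boolean optimization problem:
#   maximize    3*x0 + 2*x1 + 4*x2
#   subject to  2*x0 + 3*x1 + 1*x2 <= 4,   x in {0, 1}^3
def solve():
    best_val, best_x = None, None
    for x in product((0, 1), repeat=3):      # enumerate all 2^3 assignments
        if 2 * x[0] + 3 * x[1] + 1 * x[2] <= 4:   # feasibility check
            val = 3 * x[0] + 2 * x[1] + 4 * x[2]
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

print(solve())  # optimum 7 at x = (1, 0, 1)
```

Enumeration is exponential in the number of variables; conflict-driven solvers instead learn new constraints from each failed partial assignment to prune the search space.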
Applied and computational mathematics:
Generative models, such as variational auto-encoders and generative adversarial networks, are being developed with the aim of producing samples of complex data structures such as images, natural language, voice and sound, video, time series, financial data, etc. This project will study mathematical aspects of generative models related to large deviations theory and stochastic computational methods.
Up to two positions in computer science:
Constraint satisfaction problems (CSPs) are a well-known class of computational problems, and many problems encountered in AI (but also in other fields of computer science, mathematics, and elsewhere) are instances of the CSP. The computational complexity of CSPs has been intensively studied, and several breakthrough results have recently been presented. These results are typically based on methods from universal algebra and mathematical logic. This project aims at developing new mathematical methods for analysing the computational complexity of CSPs.
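As a concrete instance of the CSP, graph colouring asks for an assignment of values (colours) to variables (vertices) subject to disequality constraints on the edges. The backtracking solver below is a standard textbook sketch, not a method from the project.

```python
# Minimal backtracking solver for graph k-colouring, a classic CSP:
# variables = vertices, domain = {0, ..., k-1},
# constraints = adjacent vertices get different colours.
def color(graph, k, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(graph):
        return assignment                       # all variables assigned
    v = next(u for u in graph if u not in assignment)
    for c in range(k):
        # check the disequality constraints against assigned neighbours
        if all(assignment.get(u) != c for u in graph[v]):
            result = color(graph, k, {**assignment, v: c})
            if result:
                return result                   # consistent extension found
    return None                                 # backtrack

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(color(triangle, 3))  # a proper 3-colouring exists
print(color(triangle, 2))  # None: a triangle is not 2-colourable
```

The algebraic approach mentioned above studies which constraint languages make such problems tractable and which make them NP-hard.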
Optimization methods for matrix factorization, dimensionality reduction and deep networks:
In recent years there has been a dramatic increase in the performance of recognition, classification and segmentation systems due to deep learning approaches, in particular convolutional networks. However, training these models requires solving very large-scale non-convex optimization problems, and it is unclear under what conditions these can be reliably solved. In this project we will study mathematical optimization methods for learning, and dimensionality reduction approaches related to matrix factorization. We will develop effective and reliable algorithms that scale well beyond today's standard and apply them to computer vision applications.
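The matrix factorization problem mentioned above, in its simplest form, seeks a low-rank approximation M ≈ UVᵀ. One classical approach is alternating least squares, sketched below; the matrix, rank and iteration count are illustrative assumptions.

```python
import numpy as np

# Sketch: low-rank matrix factorization M ≈ U V^T by alternating least
# squares (ALS). Each half-step fixes one factor and solves a linear
# least-squares problem for the other.
rng = np.random.default_rng(1)
m, n, r = 30, 20, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # exactly rank r

U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
for _ in range(30):
    U = M @ V @ np.linalg.pinv(V.T @ V)    # fix V, optimal U
    V = M.T @ U @ np.linalg.pinv(U.T @ U)  # fix U, optimal V

print(np.linalg.norm(M - U @ V.T))  # near zero for an exactly rank-r matrix
```

The joint problem in (U, V) is non-convex, but each subproblem is convex; understanding when such schemes reliably find global optima is the kind of question the project addresses.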
Note that only the project "Optimization methods for matrix factorization, dimensionality reduction and deep networks" is a WASP position in this ad.
Modelling and analysis of multi-scale neuronal networks:
The project is within the area of mathematics at the intersection of probability theory, analysis and dynamical systems. The goal is the development and study of dynamical systems that model processes compatible with brain functions, such as learning, memory, cognition and decision making. The project will focus on modelling and analysis of multi-scale complex systems that combine the molecular/particle level with macro-dynamics.
Combining quantitative and qualitative analysis of multi-player games:
The project aims to explore the mathematical aspects of a general framework for modelling of intelligent multi-agent systems with both quantitative and qualitative goals and constraints. The main objectives of the project are to develop and apply mathematical methods for analysing dynamic multi-stage games modelled by concurrent game models with payoffs and guards.
Development of mathematical methods and tools for investigation and future development of deep neural networks:
The project will in particular investigate a specific type of deep neural network which is among the most successful and can be related to evolution equations. The research project involves mathematical and algorithmic development as well as practical application and testing using state-of-the-art software.
New mathematical knowledge about the intrinsic behaviour of neural networks when subjected to noise:
One of the core aims is to develop the mathematical foundation for understanding how the distribution of information-carrying capacity across a network's internal weights can be utilized to optimize its performance.
Robust learning of geometric equivariances:
Deep convolutional neural networks (CNNs) are showing impressive results in a variety of tasks in image data analysis and understanding. This project explores ways to further improve their performance by representing and exploiting known properties such as equivariances, as well as by learning equivariances from data. The project builds on and extends recent, very promising work on geometric deep learning and combines it with manifold learning, to produce truly learned equivariances without the need for engineered solutions.
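The best-known equivariance of CNNs is to translations: shifting the input shifts the feature map by the same amount. This can be checked directly for a circular convolution, where the property holds exactly. The signal, kernel and shift below are illustrative assumptions.

```python
import numpy as np

# Sketch: translation equivariance of a convolutional layer.
# Circular convolution commutes exactly with cyclic shifts, so
# "shift then convolve" equals "convolve then shift".
rng = np.random.default_rng(2)
x = rng.normal(size=32)  # 1-D "image"
k = rng.normal(size=5)   # convolution kernel

def circ_conv(sig, ker):
    # circular convolution via the FFT (kernel zero-padded to signal length)
    return np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(ker, n=len(sig))))

shift = 3
a = circ_conv(np.roll(x, shift), k)  # shift the input, then convolve
b = np.roll(circ_conv(x, k), shift)  # convolve, then shift the output
print(np.allclose(a, b))             # the two orders agree
```

Geometric deep learning generalizes this idea from translations to other transformation groups (rotations, reflections, and beyond); learning such equivariances from data, rather than engineering them, is the aim of the project.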