AI-Track

The WASP Graduate School AI-track provides a unique opportunity for students who are dedicated to achieving international research excellence with industrial relevance. The WASP AI-track offers graduate students unique courses, designed by leading researchers – in addition to the courses available in the PhD programs at the partner universities.

The graduate school aims to establish a strong multi-disciplinary and international professional network between PhD students, researchers, and industry, and to help create new research collaborations, both within academia and with industry.

AI-Track Mandatory Courses

There are five mandatory core courses in the WASP AI-track.

The first course is organized in three modules.

In the first course module, we aim to ensure that all students understand the basic concepts and tools in deep learning.

The second course module addresses optimization: learning from data is becoming increasingly important in many different engineering fields, and models for learning often rely heavily on optimization; training a machine is often equivalent to solving a specific optimization problem. These problems are typically large-scale, and in this module we learn how to solve them efficiently.
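As a minimal illustration of this equivalence, the sketch below trains a linear model by plain gradient descent on made-up least-squares data; the data, model, and step size are invented for the example, not prescribed by the course.

```python
import numpy as np

# Hypothetical data for a least-squares model y ~ X @ w.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=1000)

# "Training" as optimization: minimize f(w) = ||X w - y||^2 / (2n)
# by gradient descent, with grad f(w) = X^T (X w - y) / n.
w = np.zeros(5)
lr = 0.1
for _ in range(500):
    w -= lr * (X.T @ (X @ w - y) / len(y))

print(w)  # recovers w_true up to noise
```

Large-scale variants (stochastic and mini-batch gradient methods) follow the same pattern but use only a subset of the data per step.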

The third course module contains research-oriented topics, knowledge of which will be useful in various PhD projects within WASP.

The second course is also organized in three modules.

In the first module of the course, we will parse out the finer details of how graphs and their combinatorial properties can be used to encode probabilistic as well as causal relationships between random variables. Topics to be discussed include probabilistic graphical models, causal models, interventional distributions, and structure learning algorithms.
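To make the distinction between conditioning and intervening concrete, here is a minimal enumeration sketch on a hypothetical three-variable binary model Z → X, Z → Y, X → Y, with made-up probability tables; it is an illustration, not material from the course.

```python
# All probability tables below are made up for illustration.
p_z = {0: 0.5, 1: 0.5}                        # P(Z = z)
p_x1_given_z = {0: 0.2, 1: 0.8}               # P(X = 1 | Z = z)
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.4,    # P(Y = 1 | X = x, Z = z)
                 (1, 0): 0.5, (1, 1): 0.9}

# Observational P(Y = 1 | X = 1): condition on X = 1 in the joint.
num = sum(p_z[z] * p_x1_given_z[z] * p_y1_given_xz[(1, z)] for z in (0, 1))
den = sum(p_z[z] * p_x1_given_z[z] for z in (0, 1))
p_conditional = num / den

# Interventional P(Y = 1 | do(X = 1)): cut the edge Z -> X, so Z keeps
# its marginal (the backdoor adjustment formula).
p_do = sum(p_z[z] * p_y1_given_xz[(1, z)] for z in (0, 1))

print(p_conditional, p_do)  # 0.82 vs 0.70: conditioning != intervening
```

The gap between the two numbers is exactly the confounding effect of Z.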

In the second course module, we introduce some of the most widely used approximate inference algorithms, including Markov chain Monte Carlo, approximate message passing, and variational inference, with a particular emphasis on inference in probabilistic graphical models.
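As a small taste of the first of these algorithms, the following is a minimal random-walk Metropolis-Hastings sketch for a hypothetical one-dimensional target (a two-component Gaussian mixture, chosen only for illustration):

```python
import numpy as np

# Unnormalized log-density of a two-component Gaussian mixture (the target).
def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10000):
    proposal = x + rng.normal()                    # symmetric random walk
    log_alpha = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:          # Metropolis accept/reject
        x = proposal
    samples.append(x)

print(np.mean(samples))  # near 0 for this symmetric target
```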

The third module addresses formal logic, a powerful tool for knowledge representation and inference when information is certain. Often, however, information is uncertain. This module combines logic with probability in order to handle knowledge and inference under uncertainty. Ideally, a participant should already have basic knowledge of propositional and predicate logic (also called first-order logic).
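To illustrate one simple way of combining the two, the sketch below does weighted model counting over a toy propositional knowledge base; the propositions, rule, and prior probabilities are invented for the example.

```python
from itertools import product

# Hypothetical propositions with independent priors; "wet" carries no prior
# and is pinned down by the hard logical rule below.
priors = {"rain": 0.3, "sprinkler": 0.5}

def weight(world):
    w = 1.0
    for var, p in priors.items():
        w *= p if world[var] else 1.0 - p
    return w

def satisfies(world):
    # Hard rule: wet <-> (rain or sprinkler).
    return world["wet"] == (world["rain"] or world["sprinkler"])

worlds = [dict(zip(("rain", "sprinkler", "wet"), values))
          for values in product((False, True), repeat=3)]
evidence = sum(weight(w) for w in worlds if satisfies(w) and w["wet"])
rain_and_evidence = sum(weight(w) for w in worlds
                        if satisfies(w) and w["wet"] and w["rain"])
print(rain_and_evidence / evidence)  # P(rain | wet) ~ 0.46
```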

The third course, held as a two-day meeting, addresses the following three topics:

Responsible Design of Intelligent Systems – understanding Responsible AI as part of complex socio-technical systems; Ethical Machines; and Human-Agent Interaction.

The content of the course includes:
  • ethics in practice – ethical issues in different applications,
  • ethics by design – computational reasoning models for ethical deliberation,
  • explainable AI – mathematical principles and computational approaches, and
  • analysis and modelling of interaction constraints and norms.

In the first module of the fourth course, we aim to ensure that all students master the basic mathematical tools (statistical framework, optimization, concentration) that constitute the foundations of the theory of Machine Learning.
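As a small numerical illustration of the third of these tools, the snippet below compares an empirical deviation frequency with Hoeffding's concentration bound; the sample size, threshold, and number of trials are arbitrary choices for the demo.

```python
import numpy as np

# Empirical check of Hoeffding's inequality for i.i.d. samples in [0, 1]:
# P(|mean - E[mean]| > t) <= 2 * exp(-2 * n * t^2).
rng = np.random.default_rng(0)
n, t, trials = 200, 0.1, 20000
means = rng.uniform(size=(trials, n)).mean(axis=1)   # E[mean] = 0.5
empirical = np.mean(np.abs(means - 0.5) > t)
bound = 2 * np.exp(-2 * n * t ** 2)
print(empirical, bound)  # the empirical frequency sits below the bound
```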

The second course module applies the tools introduced in the first module to recent solutions for supervised and unsupervised learning problems (SVM, Kernel methods, Deep learning, as well as clustering and cluster validation).
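A minimal sketch of one such method, an SVM with an RBF kernel on a toy non-linearly-separable dataset, using scikit-learn; the dataset and hyperparameters are illustrative choices, not prescribed by the module.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy two-class dataset that no linear separator can handle well.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel trick: implicit feature map
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy
```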

The third course module gives a thorough introduction to theoretical and practical aspects of reinforcement learning (MDPs, dynamic programming, Q-learning, policy gradients, learning with function approximation, and recent deep RL algorithms).
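As a concrete taste of the tabular case, here is a minimal Q-learning sketch on a hypothetical five-state chain MDP; the environment and hyperparameters are invented for illustration.

```python
import numpy as np

# A 5-state chain: actions move left/right, reward 1 for reaching the
# rightmost (terminal) state. Everything here is made up for the demo.
n_states, moves = 5, (-1, +1)
Q = np.zeros((n_states, 2))
alpha, gamma, eps = 0.1, 0.9, 0.3            # generous exploration
rng = np.random.default_rng(0)

for _ in range(500):                          # episodes
    s = 0
    while s != n_states - 1:
        a = rng.integers(2) if rng.uniform() < eps else int(Q[s].argmax())
        s2 = min(max(s + moves[a], 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap on the greedy value of the next state.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q[:-1].argmax(axis=1))  # greedy policy: action 1 ("right") in every state
```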

The fifth course is given in three modules. In addition to the academic lectures, there are invited guest speakers from industry.

Module 1 – Introduction to Data Science: fault-tolerant distributed file systems and computing.

The whole data science process is illustrated with industrial case studies. Practical introduction to scalable data processing to ingest, extract, load, transform, and explore (un)structured datasets. Scalable machine learning pipelines to model, train/fit, validate, select, tune, test, and predict or estimate in unsupervised and supervised settings, using nonparametric and partitioning methods such as random forests. Introduction to distributed vertex-programming.
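To make the pipeline stages concrete, the sketch below runs fit/validate/tune/test/predict with a random forest on a single machine using scikit-learn; a distributed engine such as Apache Spark exposes the same stages at scale. This is an illustration, not the course's own code.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Load a toy dataset and hold out a test set.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Tune and select via cross-validated grid search, then test and predict.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"n_estimators": [50, 200], "max_depth": [None, 10]},
                      cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te), search.predict(X_te[:3]))
```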

Module 2 – Distributed Deep Learning: Introduction to the theory and implementation of distributed deep learning.

Classification and regression using generalised linear models, including different learning, regularization, and hyperparameter tuning techniques. The feedforward deep network as a fundamental architecture, and advanced techniques to overcome its main challenges, such as overfitting, vanishing/exploding gradients, and training speed. Various deep neural networks for different kinds of data: for example, CNNs for scaling neural networks up to large images, RNNs for scaling deep models up to long temporal sequences, and autoencoders and GANs.
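A minimal feedforward-network sketch in PyTorch (one possible framework; the data, architecture, and hyperparameters are made up) showing two of the techniques mentioned above, ReLU activations and dropout regularization:

```python
import torch
from torch import nn

# Small feedforward classifier: ReLU mitigates vanishing gradients,
# dropout mitigates overfitting.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),                 # two-class logits
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)              # hypothetical data
y = (X[:, 0] > 0).long()              # hypothetical labels

for _ in range(100):                  # standard training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(loss.item())
```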

Module 3 – Decision-making with Scalable Algorithms

Theoretical foundations of distributed systems and analysis of their scalable algorithms for sorting, joining, streaming, sketching, optimising, and computing in numerical linear algebra, with applications in scalable machine learning pipelines for typical decision problems (e.g. prediction, A/B testing, anomaly detection) with various types of data (e.g. time-indexed, space-time-indexed, and network-indexed). Privacy-aware decisions with sanitized (cleaned, imputed, anonymised) datasets and data streams. Practical applications of these algorithms on real-world examples (e.g. mobility, social media, machine sensors and logs), illustrated via industrial use-cases.
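As one self-contained example of the sketching techniques referred to above, here is a minimal count-min sketch: approximate frequency counts over a data stream in fixed memory, with one-sided overestimation error. The sizes and the toy stream are arbitrary.

```python
import random
from collections import Counter

# Parameters of the sketch (arbitrary): more rows/columns -> tighter estimates.
depth, width = 4, 256
seeds = [random.Random(i).getrandbits(32) for i in range(depth)]
table = [[0] * width for _ in range(depth)]

def update(item):
    for row, seed in enumerate(seeds):
        table[row][hash((seed, item)) % width] += 1

def estimate(item):
    # Each cell only over-counts, so the minimum is the best estimate.
    return min(table[row][hash((seed, item)) % width]
               for row, seed in enumerate(seeds))

rng = random.Random(42)
stream = [rng.choice("abcdef") for _ in range(10000)]
for item in stream:
    update(item)
print(estimate("a"), Counter(stream)["a"])  # estimate >= exact count
```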

AI-Track Elective Courses

How do we give a machine a sense of geometry? There are two aspects to what a sense is: a technical tool and the ability to learn to use it. This learning ability is essential: for example, we are born with the technical ability to detect smells, and through our lives we develop it, depending on our needs and the environment around us. In this course, the technical tool we introduce to describe geometry is based on homology. The main aim of the course is to explain how versatile this tool is, and how to use this versatility to give a machine the ability to learn to sense geometry.

Technical tool
Homology, a central theme of twentieth-century geometry, has been particularly useful for studying spaces with controllable cell decompositions, such as Grassmann varieties. During the last decade there has been an explosion of applications, ranging from neuroscience to vehicle tracking, protein structure analysis, and the nano-characterization of materials, testifying to the usefulness of homology for describing spaces related to data sets as well. One might ask: why homology? Often, due to heterogeneity or the presence of noise, it is very hard to understand our data. In such cases, rather than trying to fit the data with complicated models, a good strategy is first to investigate the shape properties of the data. Here homology comes into play.

Learning
We explain how to use homology to convert the geometry of datasets into features suitable for statistical analysis and machine learning. It is a process that translates spatial and geometrical information into information that can be analysed through more basic operations, such as counting and integration. Furthermore, we provide an entire space of such translations. Learning how to choose an appropriate translation in this space can be done in the spirit of machine learning.
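A minimal sketch of the technical tool itself, assuming homology with coefficients in the field with two elements: Betti numbers of a tiny simplicial complex (the hollow triangle, i.e. a circle) computed from boundary-matrix ranks. The complex is invented for the example.

```python
import numpy as np

# The hollow triangle: three vertices, three edges, no filled face.
vertices = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]

def gf2_rank(m):
    # Gaussian elimination over the field with two elements.
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivots = np.nonzero(m[rank:, col])[0]
        if len(pivots) == 0:
            continue
        m[[rank, rank + pivots[0]]] = m[[rank + pivots[0], rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] = (m[r] + m[rank]) % 2
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

# Boundary matrix d1: rows indexed by vertices, columns by edges.
d1 = np.zeros((len(vertices), len(edges)), dtype=int)
for j, (u, v) in enumerate(edges):
    d1[u, j] = d1[v, j] = 1

r1 = gf2_rank(d1)
b0 = len(vertices) - r1          # connected components
b1 = len(edges) - r1             # independent loops (no 2-simplices here)
print(b0, b1)                    # 1 component, 1 loop: the circle
```

Persistent homology, the workhorse of applications to data, tracks how such numbers change across a family of complexes built from the data at different scales.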

Understanding human language is one of the central goals in Artificial Intelligence. Deep learning approaches have revolutionized the field, and define the state of the art for many different natural language processing (NLP) tasks.

However, natural language data exhibit a number of peculiarities that make them more challenging to work with than many other types of data commonly encountered in machine learning: natural language is discrete, structured, and highly ambiguous.

The goal of the course is to give participants the theoretical knowledge and practical experience required to use state-of-the-art natural language processing in their own research – for example, to integrate existing software components for natural language understanding into an autonomous system, or to understand, implement, and evaluate a cutting-edge deep learning architecture based on its description in a research article.
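As an example of the first kind of use, integrating an existing component, the snippet below calls a pretrained sentiment classifier through the Hugging Face transformers library (one possible toolkit; the course does not prescribe it):

```python
from transformers import pipeline

# Downloads a default pretrained model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Natural language is discrete, structured, and ambiguous."))
# e.g. [{'label': ..., 'score': ...}]: a label plus a confidence score
```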

Vision is the human sense that requires the largest cortical capacity for extracting information from the environment. The complexity of vision is such that engineering approaches have achieved only partial success during the past half century, and it is only recently, with the help of deep learning, that computer vision algorithms have come to exceed human performance on some tasks. A fundamental concept within vision is representation in terms of features – but what is a feature, and how did its role change in the transition from engineered systems to learning-based systems?

The goal of the course’s three modules is to give participants the theoretical knowledge and practical experience required to use state-of-the-art computer vision methods in their own research – for example, to integrate existing software components for machine perception into an autonomous system, or to understand and improve cutting-edge deep learning computer vision algorithms.
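In the same spirit as the NLP example above, a minimal sketch of reusing an existing perception component, here a pretrained ResNet-18 from torchvision (one possible library; the random tensor merely stands in for a real image):

```python
import torch
from torchvision import models

# torchvision >= 0.13 weights API; the random tensor is a stand-in image.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

img = torch.rand(3, 224, 224)
batch = weights.transforms()(img).unsqueeze(0)   # resize/crop/normalize
with torch.no_grad():
    logits = model(batch)
print(weights.meta["categories"][int(logits.argmax())])  # predicted label
```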
