The WASP Graduate School offers a common set of courses for all PhD students, making it possible to tailor individual curricula from a variety of mandatory, foundational, and elective courses.
PhD students are encouraged to widen their perspectives by attending courses outside their primary field of research, thus contributing to the interdisciplinary approach of the WASP program. To complete the WASP Graduate School, students are required to take WASP courses corresponding to at least 27 hp, including the mandatory course and at least two of the three foundational courses.
Mandatory Courses
The course introduces the fundamental aspects of AI ethics by providing a holistic, multidisciplinary view of the discipline. It introduces students to the impact AI systems have on societies and individuals, and to ongoing state-of-the-art discussions of the Ethical, Legal and Societal aspects of AI. The course aims to foster critical discussion of where accountability and responsibility lie for the ethical, legal, and social impacts of AI systems, considering decision points throughout the development and deployment pipeline. Students will be introduced to socio-technical approaches for the governance, monitoring and control of intelligent systems, as well as tools for incorporating constraints into intelligent system design, and will apply these skills to a simulated responsible AI design problem.
Foundational Courses
Autonomous systems are systems designed to work in their target environment without, or with limited, human intervention. The objective of this course is to give a broad understanding of the wide area of autonomous systems and the foundational knowledge required to understand and develop them. The course covers topics from sensing and perception to control and decision making. Learning outcomes include understanding the notion of autonomy and the different components typically found in these systems, as well as fundamental techniques for autonomous systems from the areas of automatic control, computer vision, sensor fusion, learning, and software. The examination consists of exercises, labs, and projects. The participants get the opportunity to network and learn from each other during course-wide face-to-face meetings.
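As a small taste of the sensor-fusion material, the following sketch (an illustration under assumed noise levels, not course material) fuses noisy position measurements with a constant-velocity motion model using a one-dimensional Kalman filter in Python:

    import numpy as np

    # State = [position, velocity]; noise levels are illustrative assumptions.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])               # we only measure position
    Q = 0.01 * np.eye(2)                     # process noise covariance
    R = np.array([[0.5]])                    # measurement noise covariance

    x = np.zeros((2, 1))                     # initial state estimate
    P = np.eye(2)                            # initial state covariance

    rng = np.random.default_rng(0)
    true_pos = np.arange(1.0, 21.0)          # object moving at unit speed
    for z in true_pos + rng.normal(0.0, 0.7, size=20):
        # Predict: propagate state and uncertainty through the model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: correct the prediction with the noisy measurement.
        y = np.array([[z]]) - H @ x          # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    print("final fused position estimate:", x[0, 0])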
This course is under development.
Software engineering is a broad set of techniques that aim to analyze user requirements and then facilitate the design, building, and testing of software applications. This course aims to give an overview of and insight into real-world software engineering techniques, combined with a thorough overview of the Cloud technologies and infrastructures that underpin most modern applications and their development.
Software Engineering:
- (Learning outcomes) Fundamentals of version control systems, test automation, and refactoring (see the test sketch after this list).
- (Course content) Introduction to versioning, testing, and refactoring through close interactions with the students and questions related to their own software context.
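To make the test-automation outcome concrete, here is a minimal, hypothetical example of an automated unit test written for pytest; the function under test and its behaviour are made up for illustration:

    # test_pricing.py -- run with `pytest test_pricing.py`.
    import pytest

    def apply_discount(price, fraction):
        """Return the price reduced by the given fraction (0.0-1.0)."""
        if not 0.0 <= fraction <= 1.0:
            raise ValueError("fraction must be between 0 and 1")
        return price * (1.0 - fraction)

    def test_discount_reduces_price():
        assert apply_discount(100.0, 0.25) == 75.0

    def test_invalid_fraction_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, 1.5)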
Cloud Computing:
- (Learning outcomes) This course intends to provide a basic knowledge of Cloud Computing terminology, infrastructure, and architectures. You will understand how to process data in a distributed environment (in particular, using Hadoop and Spark; see the sketch after this list), learn some of the economic considerations related to Cloud computing, and get to know modern Cloud technologies, including Serverless and Edge computing. You will gain a basic understanding of DevOps in the Cloud and hear about experiences with the Cloud from industry.
- (Course content) The course will include interactive lectures, industry keynotes, and practical sessions using a real Cloud system – OpenStack running in the Ericsson Research Data Center.
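As a hedged illustration of the kind of distributed processing mentioned above, the sketch below runs the classic word count in Spark's Python API in local mode; the input path is a placeholder, and the course's own practical sessions run on OpenStack rather than on a laptop:

    from pyspark.sql import SparkSession

    # Local-mode Spark session; in the course the cluster would be remote.
    spark = (SparkSession.builder.master("local[*]")
             .appName("wordcount").getOrCreate())

    lines = spark.sparkContext.textFile("input.txt")    # placeholder path
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))    # per-word totals across partitions

    for word, n in counts.take(10):
        print(word, n)
    spark.stop()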
Elective Courses
Natural Language Processing (NLP) develops methods for making human language accessible to computers. The goal of this course is to provide students with a theoretical understanding of and practical experience with the advanced algorithms that power modern NLP. The course focuses on methods based on deep neural networks.
The course covers state-of-the-art deep learning architectures for three types of NLP applications: categorization, structured prediction, and text generation. Each module consists of video lectures, interactive sessions, and programming assignments. The final part of the course is a project where students apply their learning to their own field of research.
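For orientation, a text classifier of the kind covered in the categorization module can be as small as the following sketch (a toy bag-of-embeddings model in PyTorch with arbitrary dimensions, not one of the course's actual assignments):

    import torch
    import torch.nn as nn

    class BagOfEmbeddingsClassifier(nn.Module):
        """Embeds tokens, averages them, and classifies the pooled vector."""
        def __init__(self, vocab_size=1000, embed_dim=64, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.out = nn.Linear(embed_dim, num_classes)

        def forward(self, token_ids):            # token_ids: (batch, seq_len)
            pooled = self.embed(token_ids).mean(dim=1)
            return self.out(pooled)              # unnormalized class scores

    model = BagOfEmbeddingsClassifier()
    tokens = torch.randint(0, 1000, (4, 12))     # a fake batch of 4 sentences
    labels = torch.tensor([0, 2, 1, 0])
    loss = nn.functional.cross_entropy(model(tokens), labels)
    loss.backward()                              # gradients for one training step
    print(loss.item())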
The purpose of this course is to give a thorough introduction to deep machine learning, also known as deep learning or deep neural networks. Over the last few years, deep machine learning has dramatically improved state-of-the-art performance in various fields, including speech recognition, computer vision, and reinforcement learning (used, e.g., to learn how to play Go).
The first module starts with the fundamental concepts of supervised machine learning, deep learning, and convolutional neural networks (CNNs). This includes the main components of feed-forward networks and CNNs, commonly used loss functions, training paradigms, and architectures and strategies that help the network generalize well to unseen data.
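The components the module names can be seen together in a toy network like the following sketch (layer sizes are arbitrary assumptions, not course material):

    import torch
    import torch.nn as nn

    # A toy CNN combining convolution, nonlinearity, pooling,
    # and a fully connected classification head.
    net = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),   # feature extraction
        nn.ReLU(),
        nn.MaxPool2d(2),                             # spatial downsampling
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                   # class scores for 28x28 input
    )

    x = torch.randn(2, 1, 28, 28)                    # fake batch of grayscale images
    loss = nn.functional.cross_entropy(net(x), torch.tensor([3, 7]))
    print(loss.item())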
The second module covers algorithms used for training neural networks, such as stochastic gradient descent and Adam. We introduce the methods and discuss their implicit regularizing properties in connection with the generalization of neural networks in the overparameterized setting.
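The two optimizers mentioned can be written out directly; this numpy sketch applies plain SGD and Adam updates to a simple quadratic objective (the step sizes follow common defaults, and the problem itself is a made-up illustration):

    import numpy as np

    def grad(w):                      # gradient of f(w) = 0.5 * ||w||^2
        return w

    w_sgd = np.array([5.0, -3.0])
    w_adam = np.array([5.0, -3.0])
    m = np.zeros(2)                   # Adam's first moment estimate
    v = np.zeros(2)                   # Adam's second moment estimate
    lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

    for t in range(1, 101):
        # Stochastic gradient descent: step against the gradient.
        w_sgd -= lr * grad(w_sgd)

        # Adam: bias-corrected moment estimates rescale each coordinate.
        g = grad(w_adam)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)
        v_hat = v / (1 - beta2**t)
        w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

    print("SGD:", w_sgd, " Adam:", w_adam)   # both approach the minimum at 0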
The last module of the course presents the theory behind some advanced topics of deep learning research, namely recently developed methods for 1) equipping discriminative deep networks with estimates of their uncertainty and 2) different types of deep generative models. The students’ knowledge will be examined with corresponding hand-in assignments.
Probabilistic graphical models play a central role in modern statistics, machine learning, and artificial intelligence. Within these contexts, researchers and practitioners alike are often interested in modeling the (conditional) independencies among variables within some complex system. Graphical models address this problem combinatorially: A probabilistic graphical model associates jointly distributed random variables in the system with the nodes of a graph and then encodes conditional independence relations entailed by the joint distribution in the edge structure of the graph. Consequently, the combinatorial properties of the graph play a role in our understanding of the model and the associated inference problems. We will see in this course how the graph structure provides the scientist with (1) a transparent and tractable representation of relational information encoded by the model, (2) effective approaches to exact and approximate probabilistic inference, and (3) data-driven techniques for learning graphical representatives of relational information, such as conditional independence structure or cause-effect relationships.
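As a concrete miniature of this idea, the following sketch (an illustrative toy with made-up probability tables, not course material) builds the chain X -> Y -> Z and verifies numerically that X and Z are conditionally independent given Y, exactly as the graph's edge structure predicts:

    import numpy as np

    # Chain graph X -> Y -> Z over binary variables; tables are made up.
    p_x = np.array([0.6, 0.4])                       # P(X)
    p_y_given_x = np.array([[0.7, 0.3],              # P(Y | X=0)
                            [0.2, 0.8]])             # P(Y | X=1)
    p_z_given_y = np.array([[0.9, 0.1],              # P(Z | Y=0)
                            [0.4, 0.6]])             # P(Z | Y=1)

    # The joint factorizes along the graph: P(x,y,z) = P(x) P(y|x) P(z|y).
    joint = (p_x[:, None, None]
             * p_y_given_x[:, :, None]
             * p_z_given_y[None, :, :])

    # Check X independent of Z given Y: P(x,z|y) must equal P(x|y) P(z|y).
    for y in (0, 1):
        p_xz_y = joint[:, y, :] / joint[:, y, :].sum()
        p_x_y = p_xz_y.sum(axis=1)
        p_z_y = p_xz_y.sum(axis=0)
        assert np.allclose(p_xz_y, np.outer(p_x_y, p_z_y))
    print("X is conditionally independent of Z given Y, as the graph encodes.")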
This course is under development and the information below will be updated.
Human perception relies to a large extent on vision, and we experience our vision-based understanding of the world as something intuitive, natural, and simple. The key to our visual perception is a feature abstraction pipeline that starts already in the retina. Thanks to this learned feature representation, we are able to recognize objects and scenes from just a few examples, perform visual 3D navigation and manipulation, and understand poses and gestures in real time. Modern machine learning has brought similar capabilities to computer vision, and the key to this progress is the internal feature representations that are learned from generic or problem-specific datasets to solve a wide range of classification and regression problems.
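Such learned representations can be inspected directly. The sketch below, assuming torchvision is installed (it downloads pretrained weights on first use), takes a ResNet-18 trained on ImageNet and uses everything but its final classification layer as a generic feature extractor:

    import torch
    import torchvision

    # Pretrained ResNet-18; the weights argument follows recent torchvision versions.
    resnet = torchvision.models.resnet18(
        weights=torchvision.models.ResNet18_Weights.DEFAULT)
    resnet.eval()

    # Drop the final classification layer to expose the learned features.
    feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])

    image_batch = torch.randn(1, 3, 224, 224)        # stand-in for a real image
    with torch.no_grad():
        features = feature_extractor(image_batch).flatten(1)
    print(features.shape)                            # a (1, 512) feature vector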
This course introduces the basic concepts and mathematical tools that constitute the foundations of the theory of machine learning. Learning theory concerns questions such as: What kinds of guarantees can we prove about practical machine learning methods, and can we design algorithms achieving desired guarantees? Is Occam’s razor a good idea and what does that even mean? What can we say about the fundamental inherent difficulty of different types of learning problems?
To answer these questions, the course discusses fundamental concepts such as probably approximately correct (PAC) learnability, empirical risk minimization and VC theory, and then covers the main ML subfields, including supervised learning (linear classification and regression, SVM, and deep learning), unsupervised learning (clustering), and reinforcement learning.
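One guarantee of this kind, stated here for orientation (a standard result): for a finite hypothesis class in the realizable setting, any learner that outputs a hypothesis consistent with the training sample returns, with probability at least 1 - delta, a hypothesis with true error at most epsilon whenever the sample size m satisfies

    m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right).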
The examination consists of two homework assignments and a take-home exam.
This course is under development.
The course is given in three modules. In addition to lectures by the organizers there will be invited guest speakers from industry.
Module 1 – Introduction to Data Science: Introduction to fault-tolerant distributed file systems and computing.
The whole data science process illustrated with industrial case-studies. Practical introduction to scalable data processing to ingest, extract, load, transform, and explore (un)structured datasets. Scalable machine learning pipelines to model, train/fit, validate, select, tune, test and predict or estimate in an unsupervised and a supervised setting using nonparametric and partitioning methods such as random forests. Introduction to distributed vertex-programming.
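A scalable pipeline of the kind described could look like the following sketch in Spark's Python ML API (the column names and in-memory data are placeholders standing in for a real distributed dataset):

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import RandomForestClassifier

    spark = (SparkSession.builder.master("local[*]")
             .appName("rf-pipeline").getOrCreate())

    # Tiny in-memory stand-in for a real distributed dataset.
    df = spark.createDataFrame(
        [(1.0, 0.2, 0), (0.3, 1.5, 1), (1.2, 0.1, 0), (0.2, 1.9, 1)],
        ["f1", "f2", "label"])

    # Assemble feature columns and fit a random forest as one pipeline.
    pipeline = Pipeline(stages=[
        VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
        RandomForestClassifier(featuresCol="features", labelCol="label"),
    ])
    model = pipeline.fit(df)
    model.transform(df).select("label", "prediction").show()
    spark.stop()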
Module 2 – Distributed Deep Learning: Introduction to the theory and implementation of distributed deep learning.
Classification and regression using generalised linear models, including different learning, regularization, and hyperparameter tuning techniques. The feedforward deep network as a fundamental architecture, and advanced techniques to overcome its main challenges, such as overfitting, vanishing/exploding gradients, and training speed. Various deep neural networks for various kinds of data: for example, the CNN for scaling neural networks up to large images, the RNN for scaling deep models up to long temporal sequences, and autoencoders and GANs. In this course module, we aim to ensure that all students understand the basic concepts and tools in distributed deep learning.
Module 3 – Decision-making with Scalable Algorithms
Theoretical foundations of distributed systems and analysis of their scalable algorithms for sorting, joining, streaming, sketching, optimising, and computing in numerical linear algebra, with applications in scalable machine learning pipelines for typical decision problems (e.g. prediction, A/B testing, anomaly detection) with various types of data (e.g. time-indexed, space-time-indexed, and network-indexed). Privacy-aware decisions with sanitized (cleaned, imputed, anonymised) datasets and data streams. Practical applications of these algorithms on real-world examples (e.g. mobility, social media, machine sensors and logs). Illustration via industrial use-cases.
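Of the sketching techniques mentioned, the count-min sketch is among the simplest; below is a small self-contained Python version (widths, depths, and the stream are illustrative choices) that estimates item frequencies in a data stream using fixed memory:

    import hashlib

    class CountMinSketch:
        """Fixed-memory frequency estimates; counts are never underestimated."""
        def __init__(self, width=64, depth=4):
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def _index(self, item, row):
            digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            return int(digest, 16) % self.width

        def add(self, item):
            for row in range(self.depth):
                self.table[row][self._index(item, row)] += 1

        def estimate(self, item):
            # Each row overcounts due to collisions; the minimum is tightest.
            return min(self.table[row][self._index(item, row)]
                       for row in range(self.depth))

    cms = CountMinSketch()
    for item in ["a"] * 50 + ["b"] * 20 + ["c"] * 5:
        cms.add(item)
    print(cms.estimate("a"), cms.estimate("b"), cms.estimate("c"))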
How can we give a machine a sense of geometry? There are two aspects of what a sense is: a technical tool and an ability to learn to use it. This learning ability is essential. For example, we are born with the technical ability to detect smells, and throughout our lives we use and develop this sense depending on our needs and the environment around us. In this course, the technical tool we introduce to describe geometry is based on homology. The main aim of the course is to explain how versatile this tool is and how to use this versatility to give a machine the ability to learn to sense geometry.
Technical tool. Homology, a central theme of 20th-century geometry, has been particularly useful for studying spaces with controllable cell decompositions, such as Grassmann varieties. During the last decade there has been an explosion of applications, ranging from neuroscience to vehicle tracking, protein structure analysis, and the nano-characterization of materials, testifying to the usefulness of homology for describing spaces related to data sets as well. One might ask: why homology? Often, due to heterogeneity or the presence of noise, it is very hard to understand our data. In these cases, rather than trying to fit the data with complicated models, a good strategy is to first investigate the shape properties of the data. Here homology comes into play.
Learning. We explain how to use homology to convert the geometry of datasets into features suitable for statistical analysis and machine learning. It is a process that translates spatial and geometrical information into information that can be analysed through more basic operations, such as counting and integration. Furthermore, we provide an entire space of such translations.
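In its simplest, zero-dimensional form this translation fits in a few lines. The sketch below (an illustration on a random point cloud rather than real data) computes the death scales of connected components as a distance threshold grows, which is the 0-dimensional persistent homology of the point cloud:

    import numpy as np

    def zero_dim_persistence(points):
        """Death scales of connected components (0-dim persistent homology).

        All components are born at scale 0 and die when the growing distance
        threshold merges them; this is single-linkage clustering in disguise.
        """
        n = len(points)
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        # All pairwise edges, sorted by length (the filtration scale).
        edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                       for i in range(n) for j in range(i + 1, n))
        deaths = []
        for dist, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:                 # two components merge: one dies
                parent[ri] = rj
                deaths.append(dist)
        return deaths                    # n-1 deaths; one component never dies

    rng = np.random.default_rng(0)
    cloud = np.vstack([rng.normal(0, 0.1, (10, 2)),     # two well-separated
                       rng.normal(5, 0.1, (10, 2))])    # clusters of points
    print(max(zero_dim_persistence(cloud)))  # one long-lived feature: the gap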
The objective of this course is to gain experience of working on a practical problem within the areas of autonomous systems and software. Important learning outcomes are to use technologies relevant for prototyping, to gain experience of working in a group comprising different competences, and to collaborate with an external industrial partner. The course is carried out in independent project groups of 5-7 students. The results of the technical work in these groups are presented in the form of a report and presentations.
Introductory Courses
A higher-level artificial intelligence should be able to express knowledge and reason about it, e.g. draw conclusions from facts or from assumptions. This can be achieved by using some kind of language. Formal (or symbolic) languages have been constructed partly for the sake of turning the “art of making conclusions”, and related problems, into a mathematically precise activity, and partly to facilitate the automation of reasoning. The formal languages of propositional logic and first-order logic are the cornerstones of formal approaches to knowledge representation and deduction, and the multitude of other formal logical languages that exist are modifications or extensions of these. This course is an introduction to the key notions and results of propositional and first-order logic.
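As a first taste of mechanized reasoning, the sketch below (a hypothetical mini-example, not course material) checks propositional entailment by brute-force truth tables: a conclusion follows from the premises exactly when every assignment satisfying all premises also satisfies the conclusion:

    from itertools import product

    def entails(premises, conclusion, variables):
        """Truth-table check: do the premises logically entail the conclusion?

        Formulas are Python functions from a variable assignment (a dict)
        to a bool, so `and`/`or`/`not` play the role of the connectives.
        """
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False             # found a countermodel
        return True

    # Modus ponens: from p and (p -> q), conclude q.
    p = lambda env: env["p"]
    p_implies_q = lambda env: (not env["p"]) or env["q"]
    q = lambda env: env["q"]

    print(entails([p, p_implies_q], q, ["p", "q"]))   # True
    print(entails([p_implies_q], q, ["p", "q"]))      # False: q needs p too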
This is an introductory course targeting students with a less strong background in mathematics. It provides the theoretical mathematical background expected for the machine learning parts of all WASP courses.