Machine Learning, Deep Learning and other AI
The focus of this cluster is Machine Learning (ML), Deep Learning (DL) and other AI, where the latter in particular includes eXplainable AI (XAI).
During the first years, AI/MLX will focus on the following scientific areas:
Representation learning and grounding: All ML algorithms depend on data representation – an efficient and appropriate representation can enable a better understanding of the parameters that explain the variations in the data. Representations can be tailored or learned, and depend on the domain in which classification or prediction algorithms are deployed. Recent techniques for representation learning consider unsupervised learning and deep learning, including advances in probabilistic models, auto-encoders, manifold learning and deep networks. One important research question is the ability of a model to achieve abstraction and invariance. By abstraction, we mean the ability of the model to build abstract concepts from simpler ones; this is closely related to the amount of data needed to learn the model or the representation. One focus area will be the development of models that encode various levels of abstraction and that are based on multimodal input data. We will also test this through invariance: models encoding various levels of abstraction should be invariant to variations in the input data. This is closely related to the symbol grounding problem, which remains largely unsolved in AI, robotics and NLP. We will also work on combining representation learning with reasoning techniques and common-sense logic; for this reason, statistical relational learning will also be a methodology to investigate.
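To make the notion of a learned representation concrete, the sketch below trains a linear autoencoder by gradient descent to find a 2-D code for 5-D data whose intrinsic dimensionality is two. It is a deliberately minimal illustration, not part of the cluster's methodology; all sizes, the learning rate and the step count are illustrative assumptions.

```python
import numpy as np

# Minimal representation-learning sketch: a linear autoencoder learns a
# 2-D code for 5-D data that is intrinsically 2-dimensional (rank 2).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))          # hidden factors of variation
X = latent @ rng.normal(size=(2, 5))        # observed 5-D data (rank 2)

W_enc = rng.normal(scale=0.3, size=(5, 2))  # encoder weights
W_dec = rng.normal(scale=0.3, size=(2, 5))  # decoder weights
lr = 0.02
loss0 = float(np.mean((X @ W_enc @ W_dec - X) ** 2))  # loss before training

for _ in range(1000):
    Z = X @ W_enc                           # learned representation (the code)
    err = Z @ W_dec - X                     # reconstruction error
    g_dec = Z.T @ err / len(X)              # gradient of MSE wrt decoder
    g_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient of MSE wrt encoder
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

Because the data here is exactly rank 2, the 2-D code can capture it almost perfectly; a nonlinear (deep) autoencoder replaces the two matrix products with learned nonlinear maps but follows the same reconstruction principle.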
Sequential decision-making and reinforcement learning: Traditional ML has focused on pattern mining. Executing actions in the real world requires decision-making in real time, based on multimodal input. Reinforcement learning (RL) enables learning new tasks through experimentation, feedback and rewards. Most current reinforcement learning algorithms work well with discrete data, while continuous data still represents a challenge. Together with appropriate representation learning, we will address reinforcement learning in continuous domains and with multimodal feedback. RL can also be used to make an agent or a system perform many different types of tasks rather than specializing in a single one; thus, multi-task learning is another challenge we want to tackle. Scaling will be a particular challenge here, since it requires a large amount of training data. One interesting aspect is to assess to what extent we can use simulation frameworks to generate simulated data and then adapt the learned representation to real-world conditions.
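The discrete case that current RL handles well can be sketched in a few lines. Below, tabular Q-learning learns to reach a goal on a toy 5-state chain through experimentation, feedback and rewards; the environment, learning rate and schedule are illustrative assumptions, and the continuous-domain extensions discussed above are precisely what this tabular form cannot do.

```python
import random

# Sequential decision-making sketch: tabular Q-learning on a 5-state chain.
# Start at state 0; reward 1 for reaching state 4 (terminal).
N_STATES = 5
ACTIONS = (0, 1)                     # 0: move left, 1: move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration
rng = random.Random(0)

def step(state, action):
    """One environment transition; terminal when the goal state is reached."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(300):                 # episodes of experimentation
    state, done = 0, False
    while not done:
        # Epsilon-greedy: balance exploring against exploiting current Q.
        best = max(Q[state])
        greedy = [a for a in ACTIONS if Q[state][a] == best]
        action = rng.choice(ACTIONS) if rng.random() < eps else rng.choice(greedy)
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])  # TD update
        state = nxt

# Greedy policy after learning: move right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

The learned policy moves right from every non-terminal state; replacing the table Q with a function approximator over continuous states and actions is where the research challenges described above begin.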
Learning from small data sets, GANs and incremental learning: In most realistic systems, new data becomes available as the system is used by new users and potentially in new situations. Thus, the incoming data can be used to continuously extend or update the existing model. This is closely related to domains and problems where large amounts of data are not available from the beginning, and the agent or system learns, as in reinforcement learning, to balance exploration and exploitation. Methods for adding and merging new data into the existing representation need further development in terms of multimodal inputs, the combination of continuous and discrete data, and assessing performance as the system is updated with the new data. An important challenge is to develop systems that can provide meaningful and calibrated notions of their uncertainty, explain their decisions in meaningful ways, and learn from negative examples. In addition, such systems need to be able to pursue long-term goals and reason about which new data is needed to achieve them. In this respect, more recent methodologies such as adversarial methods, and generative adversarial networks (GANs) in particular, are also of interest for addressing the data generation problem.
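A minimal example of merging new data into an existing model, with an explicit uncertainty estimate, is an online (incremental) update of a Gaussian summary via Welford's algorithm: each observation updates the model without revisiting old data, and the standard error quantifies how uncertain the model still is. This is only a one-dimensional sketch of the incremental-update idea, far simpler than the multimodal setting described above; the data values are made up for illustration.

```python
import math

# Incremental-learning sketch: Welford's online algorithm maintains a
# running mean and variance, so the model is updated one observation at
# a time without storing or re-reading earlier data.
class RunningGaussian:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        """Merge one new observation into the existing summary."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else float("inf")

    def stderr(self):
        # Calibrated uncertainty about the mean; shrinks as data arrives.
        return math.sqrt(self.variance() / self.n)

model = RunningGaussian()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:   # data arriving over time
    model.update(x)
```

After the eight observations the model reports a mean of 5.0 with a sample variance of 32/7; the same update-without-retraining principle is what the richer representations discussed above would need to support.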
Multi-task and transfer learning: Transfer learning is the ability of a learning algorithm to exploit similarities between two different domains in which learning occurs. Knowledge or representations can be transferred between domains in order to speed up learning. Recent work develops frameworks where auxiliary intermediate-domain data is selected to bridge between the given source and target domains, and knowledge transfer is then performed along this bridge. An interesting question is the combination of transfer learning and deep learning, so that learning is based on separating features into layers of granularity, thus transferring both features and instances via multi-task learning.
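The core transfer-learning mechanism – fit a representation on a data-rich source domain, then reuse it frozen on a data-poor target domain – can be sketched with linear models. The shared latent direction, the affine relation between the tasks (scale 2, shift 1) and all sizes are illustrative assumptions; the point is that three target examples suffice once the representation is transferred.

```python
import numpy as np

# Transfer-learning sketch: learn a representation on a data-rich source
# task, then reuse it (frozen) as a feature for a data-poor target task.
rng = np.random.default_rng(1)
w_true = rng.normal(size=5)          # latent direction shared by both tasks

# Source domain: plenty of labelled data, y = x . w_true plus small noise.
X_src = rng.normal(size=(500, 5))
y_src = X_src @ w_true + 0.01 * rng.normal(size=500)
w_hat, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)  # learned representation

# Target domain: only 3 labelled points; labels are an affine function
# of the same latent score (scale 2, shift 1 chosen for illustration).
X_tgt = rng.normal(size=(3, 5))
y_tgt = 2.0 * (X_tgt @ w_true) + 1.0

# Transfer: freeze w_hat, fit only a 2-parameter affine "head" on the target.
feats = np.column_stack([X_tgt @ w_hat, np.ones(3)])
head, *_ = np.linalg.lstsq(feats, y_tgt, rcond=None)

def predict(X):
    return (X @ w_hat) * head[0] + head[1]

# Generalization error on fresh target-domain data.
X_test = rng.normal(size=(100, 5))
err = float(np.mean((predict(X_test) - (2.0 * (X_test @ w_true) + 1.0)) ** 2))
```

Fitting all five weights from three target examples would be badly underdetermined; reusing the source representation reduces the target problem to two parameters, which is the speed-up transfer learning is after. In deep networks the frozen part is typically the lower feature layers rather than a single linear map.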
Other areas can be considered if there are strong candidates.
Danica Kragic, KTH, email@example.com
Program Management group
Fredrik Heintz, Linköping University
Amy Loutfi, Örebro University
Thomas Schön, Uppsala University
Helena Lindgren, Umeå University
Fredrik Kahl, Chalmers
Michael Felsberg, Linköping University