
During 2020, several WASP-funded PhD students and senior researchers received prestigious awards for their scientific findings. We asked the authors to describe their awarded papers; read their summaries below.

Did we miss anyone? Let us know so that we can add you to the list:


“Style-controllable speech-driven gesture synthesis using normalising flows”

Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow, KTH Royal Institute of Technology

Honourable mention at the EUROGRAPHICS 2020 conference.

Summary: When you and I speak, we tend to make gestures with our hands, arms, and body that go together with our speech. This is part of our body language. Robots and virtual characters in games or CGI movies also need to be able to move and gesture when they speak in order to be relatable and believable. This paper described a new AI method for creating full-body gestures from speech recordings. The results look much more natural than what was previously possible and – just like with a real person – no two gestures are ever the same. The style of the gestures can also be adjusted to create gestures that are fast or slow, big or small, depending on what the situation calls for. The code and data used for the research are available on the Internet, and there has been a lot of interest in the new method from gaming and film companies, as well as among researchers and hobbyists around the world.

“Transparent IFC Enforcement: Possibility and (In)Efficiency Results”

Maximilian Algehed, Chalmers University of Technology, and Cormac Flanagan, University of California, Santa Cruz

Distinguished Paper at the 2020 IEEE Computer Security Foundations Symposium (CSF).

Summary: The holy grail of language-based security is a programming-language-agnostic mechanism for securing software without incurring large overheads or ruining secure functionality. This paper proves once and for all that no such holy grail exists.

The paper can be found online:

“ꟻLIP: A Difference Evaluator for Alternating Images”

Pontus Andersson (NVIDIA / Lund University), Jim Nilsson (NVIDIA), Tomas Akenine-Möller (NVIDIA), Kalle Åström (Lund University), Magnus Oskarsson (Lund University), Mark D. Fairchild (Rochester Institute of Technology)

Awarded 3rd Best Paper at the High-Performance Graphics 2020 conference (HPG 2020).

Summary: In computer graphics, it is often important to be able to compare two images and say where they differ and how large the differences are. This is a crucial component of, for example, algorithm development, where the output images of different algorithms are compared to a ground truth reference image and the developer wants to know which output is most similar to the reference or, put differently, which one shows the least error. ꟻLIP is a difference evaluator which outputs difference maps, i.e., images that, in each pixel, indicate the difference (or error) in that pixel. Specifically, the output of our algorithm mimics the differences an observer would perceive, at a given distance from the display, while alternating between the two images. With its availability and correspondence with human perception, ꟻLIP has already seen valuable use in research and has been an appreciated tool when comparing images.
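The idea of a per-pixel difference map can be illustrated with a small sketch. Note this is a toy stand-in, not the ꟻLIP metric itself: the real evaluator additionally models viewing distance and human perception, while here we simply take the mean absolute colour difference per pixel.

```python
import numpy as np

def difference_map(reference, test):
    """Per-pixel difference map: 0 = identical, 1 = maximal difference.

    A toy stand-in for a difference evaluator; FLIP additionally
    accounts for viewing distance and perceptual effects.
    """
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    # Mean absolute error per pixel across the colour channels,
    # assuming pixel values lie in [0, 1].
    return np.abs(ref - tst).mean(axis=-1)

# Two tiny 2x2 RGB "images": identical except for one pixel.
a = np.zeros((2, 2, 3))
b = a.copy()
b[0, 0] = [1.0, 1.0, 1.0]  # one fully different pixel

err = difference_map(a, b)  # bright where the images disagree
```

The resulting map is itself an image, which is what makes such evaluators convenient for visual inspection during algorithm development.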

Paper, code, and supplementary material for ꟻLIP are available at:

“Agents are Dead. Long live Agents!”

Virginia Dignum, Frank Dignum

Best Paper Award, Blue Sky track, AAMAS 2020

Summary: In recent years, the future of agent research has often been discussed. Most prominent is the issue of whether agents should be seen as a conceptual framework or as a software development paradigm. At the same time, developments in AI seem to have taken the field in a new direction. In this paper we argue that, for agent research to create added value for actual, real-world problems, we need to reconsider possible agent architectures and their strengths and weaknesses, their overlaps and commonalities. Finally, we present a first sketch of an architecture for such agents.


“MSE-Optimal Measurement Dimension Reduction in Gaussian Filtering”

Marcus Greiff, Anders Robertsson and Karl Berntorp, Lund University

Best Student Paper Award at IEEE CCTA, 2020.

Summary: This paper concerns the problem of state estimation (inferring, for instance, the position and velocity of a car) with a relative abundance of measurement information. In applications such as satellite positioning, a large number of measurements from multiple satellite systems may be problematic to incorporate in real time due to computational limitations. We describe a method for combining the available measurements so as to minimally worsen the estimation performance when using a reduced number of measurements with certain classes of filters.
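The mechanics of such a reduction can be sketched as follows: a linear map A compresses the measurement vector before a standard Kalman update. This is only an illustration of the plumbing; the paper's contribution is deriving the MSE-optimal choice of A, whereas here A is just a simple averaging projection.

```python
import numpy as np

def reduced_kalman_update(x, P, y, H, R, A):
    """Kalman measurement update after reducing y via a linear map A.

    A is k-by-m with k < m; the awarded paper derives the MSE-optimal A,
    here we only demonstrate the update with a generic projection.
    """
    y_r = A @ y            # reduced measurement
    H_r = A @ H            # reduced measurement model
    R_r = A @ R @ A.T      # reduced noise covariance
    S = H_r @ P @ H_r.T + R_r
    K = P @ H_r.T @ np.linalg.inv(S)
    x_new = x + K @ (y_r - H_r @ x)
    P_new = (np.eye(len(x)) - K @ H_r) @ P
    return x_new, P_new

# Toy example: a scalar state observed by four noisy sensors,
# averaged down to a single effective measurement.
x = np.array([0.0]); P = np.eye(1)
H = np.ones((4, 1)); R = 0.1 * np.eye(4)
A = np.full((1, 4), 0.25)          # simple averaging projection
y = np.array([1.0, 1.1, 0.9, 1.0])
x_new, P_new = reduced_kalman_update(x, P, y, H, R, A)
```

With fewer effective measurements, each filter update is cheaper, which is exactly what matters under the real-time constraints described above.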

“Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation”

Carl-Johan Hoel, Krister Wolff, Leo Laine, Chalmers University of Technology

Best Student Paper Award at the IEEE Intelligent Vehicles Symposium (IV2020).

Summary: Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and do not provide information about the agent’s confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when the agent faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations. However, the uncertainty information could also be used to identify situations that should be added to the training process.
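The core mechanism — disagreement across ensemble members as an uncertainty signal — can be sketched in a few lines. This is a deliberately simplified stand-in: the paper uses deep Q-networks with randomized prior functions trained on highway driving, while here the "members" are toy Q-functions whose random parts grow away from a nominal training state, and the threshold `max_std` is an illustrative name.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_actions = 10, 3

# Each member's random part scales with |state|, so members agree near
# the "training" region (state ~ 0) and disagree far from it -- a crude
# imitation of how an ensemble behaves out of distribution.
weights = rng.normal(size=(K, n_actions))

def ensemble_q(state):
    base = np.array([1.0, 0.5, 0.2])       # shared "learned" part
    return base + weights * abs(state)     # (K, n_actions) estimates

def choose_action(state, max_std=0.5):
    q = ensemble_q(state)
    std = q.std(axis=0)                    # per-action uncertainty
    mean = q.mean(axis=0)
    safe = std < max_std                   # safety classification
    if not safe.any():
        return None                        # fall back to a safe policy
    # Greedy choice among actions deemed safe.
    return int(np.argmax(np.where(safe, mean, -np.inf)))

a_in = choose_action(0.0)     # in-distribution: members agree
a_out = choose_action(100.0)  # far out-of-distribution: all unsafe
```

When no action passes the uncertainty threshold, the agent can defer to a conservative fallback policy, which mirrors the safety classification described in the summary.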

“Human-Centered Design for Safe Teleoperation of Connected Vehicles”

Frank J. Jiang, KTH Royal Institute of Technology, Yulong Gao, KTH Royal Institute of Technology and Nanyang Technological University, Singapore, Lihua Xie, Nanyang Technological University, Singapore, Karl H. Johansson, KTH Royal Institute of Technology

Best Paper Award at the 3rd IFAC Workshop on Cyber-Physical & Human Systems, 2020.

Summary: Remote teleoperation is a paradigm shift that companies ranging from Waymo to Einride have developed to handle either unexpected scenarios or scenarios that are hard to handle with automation. This paradigm shift will allow human operators to each supervise multiple vehicles at the same time, leading to important economic benefits that will help our transportation system become an automated one. One major concern regarding remote teleoperation is safety. Remote teleoperation is sensitive to problems in the wireless connection and often suffers from embodiment issues, where the human operator is unable to operate the vehicle safely because of the differences between driving through a remote interface and driving the vehicle in person. In our work, we address these safety issues with a teleoperation system that is designed to be easy for humans to use and that guarantees the safety of the vehicle, even in the presence of wireless communication problems and unintentional human error.

“Let’s Face It: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings”

Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, and Jonas Beskow, KTH Royal Institute of Technology

Best Paper Award at the ACM International Conference on Intelligent Virtual Agents (IVA ’20).

Summary: If you have ever spoken to a robot, or a chatbot on your smartphone or on a website, you know how awkward it can be. If they try to put a friendly, computer-animated face on top, things just get worse. The whole experience is stiff and unnatural, and nothing like talking to a real person. One reason for this is that the machines don’t pay attention to anything other than the words you say, and completely ignore your body language.

This paper was created to change the above state of affairs. It presented a new AI system for generating the body language of a virtual avatar – specifically its head nods and facial expressions – in response to the speech, head motion, and facial expressions of the person the AI was talking to. This was found to lead to virtual avatars with more appropriate behaviour that humans liked better. By making the code and data from the research publicly available, anyone can build on this work and apply it to make the next generation of robots more natural to interact with.

For more information about the paper and a video introduction, please see:

“Gesticulator: A framework for semantically-aware speech-driven gesture generation”

Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström, KTH Royal Institute of Technology

Best Paper Award at the ACM International Conference on Multimodal Interaction (ICMI ’20).

Summary: Human body language is much richer than you may think. Think about the hand, arm, and body motions we make when we talk: Some of these motions mostly relate to the rhythm of the speech, and may for example help put emphasis on a certain word. Other motions we make relate more to the meaning of what we are saying. For example, we may nod or shake our heads for yes or no, or shrug our shoulders when we do not know. In fact, many gestures we make follow both the rhythm and the meaning of our speech.

Unfortunately, our best AI methods for creating gestures in a computer have so far been “either or” – either they adhere to the rhythm of the speech, or they pay attention to what the words we are saying mean. This paper created the first modern AI system that was able to take both speech rhythm and meaning into account at the same time. As a result, it can generate a much broader range of motion than was possible before. This new research direction will enable more relatable and lifelike characters in computer games and VR.

For more information about the paper and a video introduction, please see:

“Teaching Robots to Perceive Time: A Twofold Learning Approach”

Inês Lourenço, KTH Royal Institute of Technology, Rodrigo Ventura, Technical University of Lisbon and Bo Wahlberg, KTH Royal Institute of Technology

Best Paper Award at the Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2020.

Summary: This work considers the problem of how time perception mechanisms, which encode the phenomenological experience of time by an individual, can be integrated into robots. Our framework replicates neural mechanisms involved in time perception, allowing robots to take a step towards temporal cognition.
A video recording of the presentation is available online.

“Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics”

Claudio Mandrioli and Martina Maggio, Lund University

ACM Distinguished Paper Award at ESEC/FSE, 2020.

Summary: Modern software systems are increasingly pervasive and need to adapt to uncertain and varying environments. Testing is one of the most widely used techniques to verify that software does what it was designed to do. But how can we guarantee that software will adapt and perform well also in situations that were not foreseen during its design?
In this paper we propose a new way to use randomly generated test cases to provide statistical guarantees on software performance in unforeseen scenarios. Traditional statistical tools have proven inadequate for this task, so we imported scenario theory from the field of control systems to fill the gap.
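The flavour of a scenario-theory guarantee can be illustrated with the widely quoted sufficient sample-size condition N ≥ (2/ε)(ln(1/β) + d). This is a simplified bound from the general scenario-approach literature, not the refined analysis in the paper itself:

```python
import math

def scenario_sample_bound(eps, beta, d=1):
    """Sufficient number of random test scenarios for the scenario approach.

    If a property holds on all N sampled scenarios, then with confidence
    at least 1 - beta it also holds except on a set of probability at
    most eps. Uses the simple sufficient condition
    N >= (2/eps) * (ln(1/beta) + d), with d decision variables.
    """
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# e.g. at most 5% violation probability, with confidence 1 - 1e-6:
n = scenario_sample_bound(eps=0.05, beta=1e-6, d=1)
```

The appeal is that the number of random test cases needed depends only on the desired guarantee, not on the internals of the software under test.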

“Constraint-Based Software Diversification for Efficient Mitigation of Code-Reuse Attacks”

Rodothea Myrsini Tsoupidi, Roberto Castañeda Lozano and Benoit Baudry, KTH Royal Institute of Technology

Best Paper Award at the International Conference on Principles and Practice of Constraint Programming, 2020.

Summary: Diversity contributes to the robustness, health and resilience of biological systems, social systems, human organizations, economic systems and immune systems. Yet software systems are homogeneous. Fierce technical and business competition supports the emergence of a dominant solution, creating what are known as ‘software monocultures’. While this is good for maintaining interoperability, it is also a risk for the security and reliability of software systems. In this work, we explore a new technique that can transform one program into many different ones. The key scientific challenge we address is the following: all the programs should perform the same operation, yet they should all do it in a different way.

“Hybrid Dynamic Programming for Simultaneous Coalition Structure Generation and Assignment”

Fredrik Präntare and Fredrik Heintz, Linköping University

Best Student Paper Award at the 23rd International Conference on Principles and Practice of Multi-Agent Systems (PRIMA2020).

Summary: We develop, analyze and benchmark two algorithms for simultaneous coalition structure generation and assignment—a combinatorial optimization problem in which indivisible elements (e.g., goods/courses) have to be distributed in bundles (i.e., partitioned) among a set of other elements (e.g., buyers/students) to maximize aggregated total value. In all of our benchmarks, our main contribution—a hybrid dynamic programming algorithm—outperforms the state of the art and the industry-grade solver CPLEX by several orders of magnitude. For example, when solving one of our most difficult problem sets, it finds the optimum in roughly 0.1% of the time that the current best methods need—in other words, the state of the art is approximately 1000 times slower.
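The problem itself can be stated very compactly: assign each indivisible item to one agent, and score the induced bundles. A brute-force sketch (exponential in the number of items, and nothing like the paper's hybrid dynamic programming algorithm) makes the objective concrete; the names and the `value` function below are illustrative:

```python
from itertools import product

def best_assignment(items, agents, value):
    """Exhaustively assign each indivisible item to one agent and
    return the bundling that maximizes total value.

    value(agent, bundle) scores a frozenset of items for an agent.
    """
    best, best_val = None, float("-inf")
    for choice in product(agents, repeat=len(items)):
        bundles = {a: frozenset(i for i, c in zip(items, choice) if c == a)
                   for a in agents}
        total = sum(value(a, b) for a, b in bundles.items())
        if total > best_val:
            best, best_val = bundles, total
    return best, best_val

# Toy instance: two buyers value bundles of two goods differently.
values = {
    ("alice", frozenset()): 0, ("bob", frozenset()): 0,
    ("alice", frozenset({"x"})): 3, ("bob", frozenset({"x"})): 1,
    ("alice", frozenset({"y"})): 1, ("bob", frozenset({"y"})): 4,
    ("alice", frozenset({"x", "y"})): 5, ("bob", frozenset({"x", "y"})): 5,
}
bundles, total = best_assignment(["x", "y"], ["alice", "bob"],
                                 lambda a, b: values[(a, b)])
```

Because bundle values are not simply sums of item values, the search space grows extremely fast, which is why efficient exact algorithms like the one in the paper matter.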

“Challenges and opportunities in open data collaboration – a focus group study”

Per Runeson and Thomas Olsson, Lund University.

Best Paper Award at the 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2020.

Summary: Machine learning applications need a lot of data for training, which gives the big players a competitive advantage. Sharing data, more or less openly, similar to open source software, may be an option to get access to more data. We organized five workshops with 27 practitioners from 22 companies, public organizations, and research institutes to explore the conditions for such data sharing. In the discussions, we observed a general interest in the subject, both from private companies and public authorities. Observed challenges include licensing, liability, trust, and privacy issues, while the opportunities include reduced costs for annotation, benefits of experience sharing, as well as reduced risks of lock-in and protectionist approaches. The work continues by studying emerging data collaboration projects, called data ecosystems.

The paper can be found online.


Published: January 21st, 2021
