Accurate localization anywhere and anytime – of vehicles, robots, humans, and gadgets in both the absolute and relative sense – is a fundamental component in achieving high levels of autonomy. The research challenge is to provide scalable, available and reliable smart localization technology needed to enable future intelligent and autonomous systems.

Photo: David Brohede

Vision of the cluster


Smart localization is crucial for all future autonomous systems, including autonomous functions and self-driving vehicles, indoor positioning systems, the Internet of Things, as well as asset management and other industrial applications. The challenges in these applications include high complexity, cost reduction, distributed cloud computation, and high demands on position accuracy and integrity. Scalability is a ubiquitous concern in all autonomous systems, in particular in cloud-based localization systems, where many trade-offs are needed in the system optimization and where the future number of users and nodes is, in many applications, essentially unlimited.


Our domain expertise offers a toolbox of algorithms and solutions for localization challenges, and our research aims at analyzing their efficiency in terms of accuracy and integrity for different sensor configurations (cost perspective) and implementation constraints (distributed cloud, time delays, power constraints, etc.). These challenges span all the way from the very high integrity demands of aircraft navigation to the low integrity demands of personal devices, and from the high accuracy of self-driving cars, via the legally mandated accuracy of cellular devices, to the user-defined acceptable accuracy of VR tools.

Connection to other WASP Clusters

One important application domain is of course autonomous systems in vehicles, such as self-driving cars and future safety systems, so the ATS (Automated Transport Systems) cluster is naturally connected. IPLVIAS includes many central sensor fusion concepts that also apply to localization. It is anticipated that many future localization systems will be implemented in the cloud, or will be cloud-assisted, so the Autonomous Cloud and Networks cluster is highly relevant. Situational awareness is crucial in all decision support systems, and localization is a particular challenge in sensor-rich environments, so there is a close connection to ICAASE.

A drone with a radar scanner developed by Recco is used to localise a missing person equipped with a Recco reflector.

Future Demonstrations

There are demonstration arenas suitable for localization at KTH, Lund and LiU, all of which include well-defined indoor environments where ground-truth reference systems are available. However, localization eventually needs to be evaluated in real environments. Our industrial partners have access to suitable arenas in many different settings, ranging from vehicles operating underground to aircraft. We have held five regular workshops since 2011, to which we invite industry to participate in our presentations and see our live demos, a tradition we will continue under the WASP umbrella.

Main Research Questions

High-cost, high-accuracy localization in outdoor environments relies today on GPS, with no support for monitoring its reliability and no backup when it fails to be available. The current challenge lies in developing cost-efficient localization solutions for environments and situations where GPS-based solutions are not sufficient. Our research will therefore focus on enabling localization solutions that fulfil the following three performance criteria:

  1. Scalability over geographical area and cost of deployment
  2. Availability over space and time, including an ability to adapt over space and time
  3. Reliability, providing trustworthy position information, including not only the position estimate itself but also a description of its uncertainty

We want to explore these areas using a sensor fusion and learning approach, where redundancy and large data sets are tools to achieve our goals. Cloud solutions, including crowd-sourcing concepts, provide promising ways to access large amounts of data from various sensor modalities. New sensor technologies and radio standards offer a steady stream of new sensor fusion challenges, and old paradigms have to be revised for each technology leap. Centralized off-line sensor fusion solutions provide upper bounds on performance, while distributed real-time implementations on incomplete data streams frame our research themes.

Our PhD sub-projects all relate to these key questions, from both a methodology and an application-oriented point of view.

Societal Impact and Areas of Applications

The listed research challenges have strong connections to the classical automotive, manufacturing, and communications industries, as well as to many new startups and SME companies. One main challenge for our industrial partners is to provide the domain expertise that leads to relevant research goals, including specifications of feasible hardware, on-going standardization efforts, and performance needs for their future localization systems. Localization is also of industrial relevance for many other Swedish companies, such as Volvo Cars, Volvo Trucks, Volvo CE, and Scania, which are developing autonomous vehicles. Moreover, as localization is a fundamental capability in the design of smart products and systems, it is an enabling technology for hospitals, industrial asset management, infrastructure for traffic, logistics, water, gas, electricity, building control, etc. Localization is also a hot topic in the gaming industry and in AR and VR applications. Noteworthy is that we have a number of successful spin-offs in the area of localization, including Combain Mobile and MAPCI from Lund, Senionlab and NIRA Dynamics from Linköping, and the OpenShoe project and 13th Lab from KTH.


The supervisors of this cluster have organized workshops on localization since 2010:

1. 2010: Kickoff triple meeting: KTH R1 August 19, LiU August 27 morning and FOI August 27 afternoon

2. 2011: Lund University, January 11

3. 2013: Chalmers, September 25

4. 2014: Ericsson Studio, Kista, September 10

5. 2015: FOI Kista, March 5-6

6. 2017: Lund University, April 26

7. 2018: Kolmården (Vildmarkshotellet and the Zoo test site), August 28-29

8. 2019: Gränsö Slott, September 3-4


Cluster coordinator

Fredrik Gustafsson, Linköping University


Henk Wymeersch, Chalmers

Joakim Jaldén, KTH

Patric Jensfelt, KTH

Isaac Skog, Linköping University

Gustaf Hendeby, Linköping University

Bo Bernhardsson, Lund University

Fredrik Tufvesson, Lund University

Kalle Åström, Lund University

Anders Robertsson, Lund University

Magnus Oskarsson, Lund University

Rickard Karlsson, Linköping University



5G mmWave SLAM for Vehicular Localization   

Yu Ge (Chalmers)
Supervisors: Henk Wymeersch, Lennart Svensson (Chalmers)

5G mmWave communication is very useful for localization and mapping, due to the geometric relation between the propagation paths and the location of the user with respect to the base station. 5G mmWave signals from the base station can reach the user via multiple propagation paths. In this project, we are developing algorithms for solving the end-to-end problem, including four phases: downlink data transmission, multi-dimensional channel estimation, channel parameter clustering, and a SLAM filter with a novel likelihood function.

The animation shows a single vehicle driving in a circle around a base station (BS). The environment has four walls, each corresponding to a “virtual anchor” (VA) shown in blue, as well as four small scattering points (SP, in red), which are only visible when the vehicle is nearby (shown with the yellow dome). As the vehicle moves, it determines its position relative to the virtual anchors and scattering points, figures out which measurements come from which objects, deals with clutter measurements, and simultaneously tracks its own state. Once the environment is mapped, it can be used by other vehicles as a priori knowledge.
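The virtual-anchor geometry described above can be sketched in a few lines: a single-bounce reflection off a wall appears to originate from the mirror image of the base station in that wall. A minimal sketch, assuming a 2D scene and a wall given by a point and a normal (all values illustrative):

```python
import numpy as np

def virtual_anchor(bs, wall_point, wall_normal):
    """Reflect the base-station position across a wall plane.

    The mirror image is the 'virtual anchor' (VA) from which a
    single-bounce reflection appears to originate.
    """
    n = wall_normal / np.linalg.norm(wall_normal)
    d = np.dot(bs - wall_point, n)   # signed distance from BS to wall
    return bs - 2.0 * d * n          # mirror across the plane

# BS at the origin, wall at x = 5 with inward normal (-1, 0)
bs = np.array([0.0, 0.0])
va = virtual_anchor(bs, np.array([5.0, 0.0]), np.array([-1.0, 0.0]))
# The VA lies at (10, 0): twice the wall distance behind the wall
```

Since a VA is static, estimating its position is equivalent to mapping the wall, which is what makes the SLAM formulation natural.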

Distributed autonomous systems for localization and mapping

Andreas Bergström (industrial PhD student), Ericsson and LiU
Supervisors: Fredrik Gustafsson LiU, Gustaf Hendeby LiU, Fredrik Gunnarsson Ericsson

Illustration of how a radio signal reaches
the receiver in different ways due to multipath effects caused by reflections in the walls.

The goal of this project is to develop localization methods that exploit distributed radio systems, turning multipath propagation from a nuisance into a source of information.

Today we have distributed radio systems everywhere, from global navigation systems (GNSS), to cellular radio systems and local area networks. Many positioning services are based on one or a combination of these, where the received radio signals at a mobile user are analyzed and used for localization.

The basic measurements are usually split into timing and signal strength measurements. In this project, we extend these approaches by studying the power delay profile (PDP), which provides a deeper understanding of the multipath phenomena that are so hard to mitigate in classical methods.

Both timing and signal strength measurements can be derived from the PDP. However, we investigate the advantages of estimating the position from the full PDP, and relate this to a map of the environment to further understand the multipath effects, which in this way can be turned into an information source rather than a nuisance.
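To illustrate how the classical measurements fall out of the PDP, here is a minimal sketch: the time of arrival is taken as the first tap within a power threshold of the peak, and the received power is the sum over all taps. The threshold and all numbers are illustrative assumptions, not part of the project:

```python
import numpy as np

def pdp_features(pdp, delay_step, threshold_db=20.0):
    """Extract classical measurements from a power delay profile (PDP).

    pdp          : linear power per delay bin
    delay_step   : delay resolution of one bin [s]
    threshold_db : taps this far below the peak are ignored (assumed value)
    Returns (toa, total_power): time of arrival of the first significant
    tap, and total received power.
    """
    thresh = pdp.max() * 10 ** (-threshold_db / 10)
    first_bin = np.argmax(pdp >= thresh)   # index of first significant tap
    return first_bin * delay_step, pdp.sum()

# Synthetic two-path PDP: line-of-sight tap at bin 3, reflection at bin 8
pdp = np.zeros(20)
pdp[3], pdp[8] = 1.0, 0.4
toa, power = pdp_features(pdp, delay_step=1e-8)
```

Using the full PDP vector instead of only these two scalars is what lets the later taps, caused by reflections, be matched against a map of the environment.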

Management of Distributed Autonomous Systems

Per Boström (industrial PhD student), SAAB and LiU
Supervisors: Gustaf Hendeby LiU, Daniel Axehill LiU

Four WASP students demonstrating autonomous drone flight
in the Visionen test arena as part of a WASP project course. Photo: David Brohede

The goal of this project is to develop autonomous functions for the joint problem of sensor management and aircraft control to relieve a human system operator. This will be achieved by exploring high level control of the platforms and their resources, e.g. sensors.

With the introduction of autonomous flight systems, the cognitive focus of pilots has shifted from controlling the aircraft to managing the sensor systems. One can compare with modern drones, which are very easy to control, yet managing the pan, tilt and zoom of the onboard camera remains a challenging task.

Further, planning the flight to maximize the information from, e.g., a camera poses its own challenges. It requires understanding how to manage the available sensors to achieve the goal at hand; proactively taking limitations in the environment into consideration; reacting to changes; learning from experience; and collaborating as necessary.

Localization using large arrays of inertial sensors

Håkan Carlsson, KTH
Supervisors: Joakim Jaldén KTH, Isaac Skog LiU

This project considers the use of large arrays of low-cost inertial sensors to form cost-efficient virtual high-performance sensors through local sensor fusion. Such sensor arrays will offer new sensing modalities, as well as, through redundancy, the ability to self-calibrate during operation. The research challenges in this project relate both to classical questions applied to the new sensor structures, such as parameter identifiability – what can be measured and corrected for in a given array – and to the development of computationally efficient optimization procedures for the high-dimensional and typically non-convex optimization problems that arise in the parameter estimation step of the sensor fusion.
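The simplest form of the virtual-sensor idea is averaging: with independent sensor noise, averaging N gyroscopes reduces the noise standard deviation by a factor of sqrt(N). A minimal sketch with synthetic data (array size and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 32, 10_000
true_rate = 0.5          # common angular rate sensed by all gyros [rad/s]
noise_std = 0.05         # per-sensor noise level (assumed)

# Each row: one low-cost gyro measuring the same rotation
readings = true_rate + noise_std * rng.standard_normal((n_sensors, n_samples))

# Virtual high-performance gyro: average across the array
virtual = readings.mean(axis=0)

per_sensor_std = readings[0].std()
virtual_std = virtual.std()
# With independent noise, virtual_std is about noise_std / sqrt(n_sensors)
```

Real arrays are harder than this sketch: individual biases, scale factors and mounting misalignments must be calibrated before averaging helps, which is exactly the identifiability question the project studies.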

Localization and Monitoring of Vehicles supported by Inertial Sensors

Martin Lindfors, LiU
Supervisors: Rickard Karlsson NIRA Dynamics, Gustaf Hendeby LiU, Fredrik Gustafsson LiU

Illustration of a spectrogram (time-varying frequency content)
computed from a vehicle mounted accelerometer,
compared to the estimated harmonics.

The goal is to develop fundamental nonlinear filtering and estimation methods with applications to speed estimation in wheel based vehicles.

The manufacturing industry always looks for opportunities to replace costly and sensitive sensors with cheaper and more robust ones. The wheel speed sensor is one such sensor: it is exposed to a harsh environment and is a relatively costly part of a vehicle. In some applications, a contact-less accelerometer can be used instead.

By analyzing the time-varying spectrum of the vibrations caused by the wheel, the wheel speed can be computed. Another potential application is to support IoT devices that need speed information when mounted on a vehicle; in such cases, wired connections to the existing sensors can be avoided. Speed estimation in cellphones is one further application. This project has developed algorithms tailored to this problem that extend current theory in nonlinear filtering.
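The core idea can be sketched as follows: the rotation frequency shows up as a harmonic in the accelerometer signal, and the wheel speed follows as v = 2*pi*r*f. This toy example uses a synthetic vibration signal and a single FFT window (one spectrogram column); the sample rate, wheel radius and noise level are illustrative assumptions, not values from the project:

```python
import numpy as np

fs = 1000.0              # accelerometer sample rate [Hz] (assumed)
wheel_radius = 0.3       # wheel radius [m] (assumed)
f_rot = 10.0             # true wheel rotation frequency [Hz]

t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Vibration: fundamental at the rotation frequency, buried in noise
accel = np.sin(2 * np.pi * f_rot * t) + 0.3 * rng.standard_normal(t.size)

# One spectrogram column: magnitude FFT of a windowed segment
window = np.hanning(t.size)
spectrum = np.abs(np.fft.rfft(accel * window))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_est = freqs[np.argmax(spectrum)]       # dominant harmonic [Hz]

speed = 2 * np.pi * wheel_radius * f_est  # v = omega * r [m/s]
```

The actual algorithms track the harmonics over time with nonlinear filters rather than picking a single FFT peak, which is what makes them robust when the speed changes.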

Human-Robot Cohabitation

Kristin Nielsen, Epiroc
Supervisors: Gustaf Hendeby LiU, Fredrik Gustafsson LiU, Robert Lundh Epiroc

Illustration of the position dependent information content in a mine,
using two laser scanners mounted on a loader. Illustration: Epiroc.

This project aims to develop enabling technologies that allow for human-robot cohabitation in mines. That is, to facilitate safe and efficient interaction between humans, autonomous systems, and manual machinery.

The goal in human-robot cohabitation in mines is to facilitate safe and efficient interaction between humans, autonomous systems, and manually operated machines. Robust and accurate localization of each object is an enabler to achieve this goal, and the first phase of this project aims to develop algorithms onboard autonomous vehicles that localize the vehicle relative to the mine and other stationary and moving objects in the environment.

The use of autonomous vehicles in mines today requires that no manual operation takes place in the same area. Safety gates seal off a production area so that the autonomous vehicles can operate without any manual interference. This setup makes the automation very sensitive to external disturbances; for example, an operator entering through a safety gate can potentially shut down the full production area. The first approach is to base the situational awareness on data from a laser scanner, using an adaptive map of the mine.

Learning To Time Update

Anton Kullberg
Supervisors: Gustaf Hendeby LiU, Isaac Skog LiU

The time update is a critical component in navigation and tracking filters. Usually, physical dynamical models based on Newton’s laws are used to predict the motion of the vehicle. In this project, we aim to use data-driven methods and machine learning to learn the dynamics of the vehicle.

The method automatically learns how cars drive in a three-way intersection, using position measurements and a constant-velocity model combined with a Gaussian process model of maneuvers, resulting in improved localization and fast maneuver classification.

In many cases, the motion patterns of objects are highly repeatable but not known beforehand. Examples are boats drifting along streams or cars following traffic rules. Today, standard methods assume no more knowledge of the motion than that physical limitations on accelerations etc. are satisfied. Obviously, better knowledge of the underlying motion patterns would be of huge benefit. Learning these motion patterns is a problem that combines classical estimation theory with modern data-driven machine learning methods. The main challenge lies in exploiting the synergies made possible by combining the two approaches in one common framework.
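A toy version of the combination can be sketched with Gaussian process regression: the GP learns, from data, the part of the dynamics that the constant-velocity model misses, and its posterior mean is then added to the time update. Everything below is synthetic and illustrative (kernel, length scale and the sinusoidal residual are assumptions, not the project's model):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(a, b) between 1D input vectors."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Training data: acceleration residuals at track positions s, i.e. what a
# constant-velocity model failed to predict (synthetic sinusoid + noise)
s_train = np.linspace(0, 10, 25)
rng = np.random.default_rng(2)
resid = np.sin(s_train) + 0.05 * rng.standard_normal(s_train.size)

# GP posterior mean at test positions (observation noise variance 0.05**2)
K = rbf(s_train, s_train) + 0.05**2 * np.eye(s_train.size)
s_test = np.array([2.0, 5.0])
mean = rbf(s_test, s_train) @ np.linalg.solve(K, resid)
# The learned correction augments the physical time update:
# x_pred = F @ x + g(s), with g the GP posterior mean
```

The appeal of the GP formulation is that it also provides a posterior variance, so the filter knows how much to trust the learned correction in regions with little training data.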

Filtered Output Feedback for Rigid-body Systems

Marcus Greiff, Lund University
Supervisors: Anders Robertsson, Bo Bernhardsson, Zhiyong Sun

This project seeks to establish trade-offs between different synthesis methods when designing tracking controllers and estimators for nonlinear systems. The focus is primarily on unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), specifically quad-rotors. 

In nonlinear systems theory, we are often concerned with one of two problems: the design of a controller that guarantees some stability properties using full state information, or the design of a filter that estimates this state from partial information in a set of measurements. However, when combining two such components, little can generally be said about the global stability properties of the closed-loop system, as the separation principle only applies to narrow classes of nonlinear systems.

In this project, we develop filtered output feedback controllers, co-designing the estimator and controller such that stability properties can be guaranteed for the resulting closed-loop system – from measurements to tracking/estimation errors. This guarantees a quantifiable measure of robustness to bounded disturbances and enables safe and autonomous operation of the vehicles. These controllers will be cogs in several applications, such as the inventorying of supermarkets and the estimation of radiation intensity.
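For linear systems the separation principle does hold, and the combined observer-controller loop is the baseline that the nonlinear co-design generalizes. A minimal sketch on a discretized double integrator with only position measured (the gains below are assumed stabilizing choices, not values from the project):

```python
import numpy as np

# Double integrator, Euler-discretized, position measured only
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])

K = np.array([[4.0, 4.0]])    # state-feedback gain (assumed, stabilizing)
L = np.array([[0.6], [4.0]])  # observer gain (assumed, stabilizing)

x = np.array([[1.0], [0.0]])  # true state: 1 m offset, at rest
xh = np.zeros((2, 1))         # observer estimate

for _ in range(2000):
    u = -K @ xh                               # control uses the ESTIMATE only
    y = C @ x                                 # position measurement
    xh = A @ xh + B @ u + L @ (y - C @ xh)    # Luenberger observer update
    x = A @ x + B @ u                         # plant update

# Both the state and the estimation error converge toward zero
```

In the nonlinear setting this guarantee is lost, which is why the project designs the observer and controller jointly instead of tuning K and L independently.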

Massive MIMO-based Localization and Mapping

Xuhong Li, Lund University
Supervisors: Fredrik Tufvesson, Kalle Åström, Magnus Oskarsson
A paper from this project is available on Lund University’s website.

Using WiFi RTT in Indoor Localization

Martin Larsson
Supervisors: Fredrik Tufvesson
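WiFi round-trip-time (RTT) ranging, standardized as fine timing measurement in IEEE 802.11mc, converts a measured round-trip time into a distance: the responder's reported turnaround delay is subtracted, and the remaining time of flight is halved. A minimal sketch (the timing values are illustrative):

```python
C = 299_792_458.0  # speed of light [m/s]

def rtt_distance(rtt_s, turnaround_s):
    """Range from a WiFi RTT (FTM) measurement.

    rtt_s        : measured round-trip time [s]
    turnaround_s : responder processing delay reported by the AP [s]
    """
    return C * (rtt_s - turnaround_s) / 2.0

# 100 ns of flight time each way plus a 10 us responder turnaround
d = rtt_distance(10.2e-6, 10.0e-6)   # roughly 30 m
```

With ranges to three or more access points at known positions, the indoor position follows from trilateration, typically solved by nonlinear least squares.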