Vision of the cluster
We are currently seeing a rapid increase in both the amount and variety of data available, from sensors, the internet, databases, and other sources. This data holds valuable information describing complex situations in which active decision making by human experts is necessary. Increasingly sophisticated autonomous systems now collect, analyze, and present the data in aggregated forms, and there is a need to seamlessly use simulation to explore different future scenarios in an integrated decision support environment that lifts human decision making to a higher level.
Our vision is the development of the next generation of decision support systems, so-called cognitive companions, which adaptively reduce the cognitive load caused by large and rapid information flows while ensuring application-dependent, mission-critical decision time-scales.
We have identified three distinct but closely related research topics that address the challenges involved in developing the theoretical and practical foundations for cognitive companions.
Visualization – Generated data needs to be aggregated into forms that can be presented using visual metaphors, at information rates fitted to human perception, thus enabling effective decision support. An exciting novel development is the embodiment of autonomous systems as avatars, augmenting traditional data representations and visualizations. In training, learning, or personalized health care, the avatar could be manifested as a photo-real digital human. In other applications, such as big data analysis or UAV-fleet management, the avatar, or cognitive companion, represents visualizations of multi-source aggregated data. This poses research challenges in areas such as:
- Development of machine learning approaches to data reduction in visualization pipelines
- Perceptual metrics for assessment of visualization, interaction, and information transfer
- Ontology-based, data-dependent, adaptive, autonomous, and suggestive visualization
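As a concrete illustration of the first bullet above, a data-driven reduction step can project high-dimensional sensor samples down to coordinates that a visualization can actually display. The sketch below uses plain principal component analysis via NumPy; the function name and data shapes are illustrative assumptions, not part of the cluster's software.

```python
import numpy as np

def reduce_for_visualization(data: np.ndarray, k: int = 2) -> np.ndarray:
    """Project high-dimensional samples onto their k leading principal
    components, a classic data reduction step in a visualization pipeline."""
    centered = data - data.mean(axis=0)
    # SVD of the centered data matrix yields the principal directions in vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Hypothetical example: 1000 samples of 50-dimensional sensor data to 2-D
rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 50))
coords = reduce_for_visualization(samples, k=2)
print(coords.shape)  # (1000, 2)
```

A learned alternative (e.g. an autoencoder) would replace the linear projection while keeping the same pipeline position: raw data in, low-dimensional display coordinates out.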
Interaction – From a cognitive science perspective, ensuring that users benefit from autonomous learning environments represents a paradigm shift from human-system communication to humans-in-the-loop applications. The configuration, representation, and effective display of rapid information flows at multiple decision time-scales represent major challenges, breaking with traditional thinking about linear user interaction. Prioritized research topics leading to novel understanding of human cognition, interactive solutions, and collaboration will include:
- Interaction with collaborative multi-agent autonomous systems at variable levels of autonomy
- Seamless integration and effective use of tangible user interfaces, interactive surfaces/walls, projection on proximate surfaces and wearable interfaces
- Support for non-intrusive, non-conventional input forms using intuitive input streams such as body posture and gestures, hand and finger movements, and eye tracking
- Eyes-free control using multi-channel haptic on-body wearable tools
Communication – Closely coordinated and long term interaction between autonomous systems and humans will be key in future distributed and mobile autonomous systems. New software architectures that can form dynamic application-defined networks with transparent use of underlying highly heterogeneous and changing physical networks are needed. These architectures further need to support high-level context-aware composition of autonomous services, evolvability of the software, and flexible security, supporting secure communication in the presence of dynamically changing networks and components. Research topics include:
- Formal specifications and techniques for symbiotic human-robot interaction processes in terms of delegation, contracts, speech acts, and other protocols
- Formal models for shared tasks used in association with joint planning and decision making
- Techniques for creating, storing, merging, updating, and accessing distributed data models
- Efficient application-defined networks that adaptively select physical communication paths
- Safe evolvability, supporting hot updates of services as well as of the middleware itself
- Secure communication in the presence of dynamic paths and transient connectivity
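The adaptive path selection mentioned in the list above can be sketched as a simple policy that filters out unusable physical links and prefers low latency. The `Link` record, thresholds, and link names below are hypothetical assumptions for illustration; a real application-defined network would also react continuously to changing measurements.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured round-trip latency
    loss_rate: float    # observed packet loss in [0, 1]
    up: bool            # current link status

def select_path(links: list[Link], max_loss: float = 0.05) -> Link:
    """Pick the usable physical link with the lowest latency: a toy
    version of adaptive path selection in an application-defined network."""
    usable = [l for l in links if l.up and l.loss_rate <= max_loss]
    if not usable:
        raise RuntimeError("no usable physical path")
    return min(usable, key=lambda l: l.latency_ms)

# Hypothetical heterogeneous links available to one logical connection
links = [
    Link("satellite", latency_ms=550.0, loss_rate=0.01, up=True),
    Link("4g",        latency_ms=60.0,  loss_rate=0.02, up=True),
    Link("wifi-mesh", latency_ms=12.0,  loss_rate=0.20, up=True),  # too lossy
]
print(select_path(links).name)  # 4g
```

The key design point is the separation of concerns: the application expresses requirements (here, a loss bound), while the network layer maps them onto whatever physical paths currently exist.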
Advanced decision support environments, in stationary as well as mobile setups, for human interaction and communication are key enabling components in the implementation of increasingly autonomous systems for industrial and societal use. It is widely recognized that mission-critical decision making relies on further development of visualization, interaction, and communication. The research in this cluster addresses some of the underpinning technologies that will take decision support into the autonomy era by leveraging human and machine intelligence in practical applications of the highest possible industrial value. We see direct use cases for the technologies developed within this cluster in areas such as safety-critical industrial process surveillance, automotive applications, interaction with and visualization of big data from e.g. large-scale simulations or high-bandwidth sensor networks, and advanced diagnostic support in the medical domain. The integration of eye tracking, gestures, and natural language into natural user interfaces for autonomous systems is an emerging industrial area in itself. Likewise, technologies and applications for new display modalities such as augmented reality, screenscape visualization, and glasses-free 3D displays are key components in the development of human interaction with autonomous systems.
SP1 – Image synthesis and sensor simulation for machine learning PhD student: Apostolia Tsirikoglou, Advisors: Jonas Unger, Anders Ynnerman: This sub-project will contribute to the goal of the project by building the underlying theory and software platforms for using synthetic data, generated through image synthesis, to train machine learning algorithms, with a strong focus on deep learning. The project investigates the accuracy requirements on the light transport simulation, which trade-offs can be made, and how training strategies and architectures can be designed to improve the performance of deep neural nets trained on synthetically generated data.
SP2 – Rich and Natural Interaction with High Dimensional Data PhD student: Tomasz Kosiński, Advisors: Morten Fjeld, Marco Fratarcangeli: This sub-project takes a Human-Computer Interaction (HCI) approach to agents for sensor-rich environments. Enhancing user capabilities through available, cutting-edge hardware and software solutions is a priority, along with the transparency and accessibility of the methods employed. An emphasis is put on giving the user a high degree of control over the data involved. A further aim is to investigate how trust can be improved and how intuitive interaction can be achieved.
SP3 – Interaction with autonomous agents for medical decision support PhD student: Martin Svenson, Advisors: Anders Ynnerman, Jonas Löwgren, Claes Lundström: This project deals with the development of novel methods for bi-directional interaction between humans and systems of autonomous agents, enabling reciprocal and continuous learning, with a focus on medical diagnostic applications. The goals are to support a unique diagnostic approach for each patient, to support multiple concurrent analyses operating on separate information sources, and to provide large amounts of annotated ground-truth data to feed machine learning algorithms.
SP4 – Digital Cognitive Companion for Marine Vessels PhD student: Mårten Lager (Saab Kockums), Advisors: Jacek Malec, Elin Anna Topp, Roger Berg: The current trend is that naval ships are being equipped with an increasing number of increasingly complex sensor systems, integrated onboard or on autonomous vehicles, forming a multi-agent system (MAS). The main goal of this project is to develop a solution in which operators can effectively interact with the system at a more abstract level, where new advanced fusion algorithms replace some of the work that operators need to perform in present solutions. Instead of managing abstracted data from each sensor system locally, higher-level human-in-the-loop decision making will be used: the operators will make decisions based on aggregated data, harvested through data fusion performed on the total data set.
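As a minimal, hypothetical stand-in for the fusion algorithms SP4 targets, the sketch below combines position estimates from several sensor systems by inverse-variance weighting, so that more certain sensors contribute more to the aggregated picture presented to the operator. All names and numbers are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of estimates from several
    sensor systems: each sensor's weight is 1/variance, so more
    certain sensors dominate the fused result."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights[:, None] * estimates).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()   # fused estimate is more certain
    return fused, fused_var

# Hypothetical: three sensors report a 2-D contact position, with
# differing measurement noise (variance) per sensor
est, var = fuse_estimates(
    estimates=[[10.0, 4.0], [10.4, 3.8], [9.0, 5.0]],
    variances=[1.0, 0.5, 4.0],
)
print(est, var)
```

Note that the fused variance is smaller than any single sensor's variance, which is exactly the property that lets aggregated data replace operator-side bookkeeping of individual sensor feeds.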