Protecting autonomous systems from side-channel attacks
Our ability to trust autonomous systems relies on their security at all levels, including the physical implementation. However, attacks exploiting low-level hardware vulnerabilities are growing in number, so stronger methods for ensuring the tamper resistance of physical implementations are required.
At present, the most common type of attack on physical implementations is the side-channel attack. This project investigates the impact of side-channel attacks on autonomous system security, both destructively (by exploring new attack vectors) and constructively (by designing countermeasures).
Long term impact
The project is expected to deliver significant theoretical results for the research community as well as practical solutions beneficial to Swedish companies and civil authorities. Its long-term impact will come from securing implementations of autonomous systems against side-channel attacks, thereby contributing to the strategic need for enhanced cybersecurity in line with Sweden’s national cybersecurity strategy.
At present there are no commonly accepted methods for assuring the side-channel attack resistance of autonomous systems’ physical implementations. This project aims to address this important open problem. It will cover a wide range of approaches, including attacks on remote targets and on Trusted Execution Environments (TEEs).
The main challenge of the project is to create defence mechanisms against side-channel attacks that remain effective as the adversary’s capabilities grow. By treating the mathematical and electrical-engineering problems together, the project is expected to lay a foundation for the next generation of physical-security-aware autonomous systems, in which side-channel resistance is incorporated at the design stage rather than patched in later, once a vulnerability is discovered.
The team of PIs consists of four renowned researchers with complementary skills:
Elena Dubrova, professor at KTH Royal Institute of Technology
Thomas Johansson, professor at Lund University
Christian Gehrmann, professor at Lund University
Carl-Mikael Zetterling, professor at KTH Royal Institute of Technology
The team brings together top experts in hardware security, cryptography, software and system security, and integrated circuit design. The KTH group was one of the first to investigate the potential of deep learning in the context of power- and EM-based side-channel analysis. The Lund University group has been active in timing attacks on candidates in the NIST post-quantum cryptography standardisation process.
There will be collaboration with national cybersecurity initiatives such as Cybercampus, Cybernode, Swedsoft, ELLIIT, and CDIS. The project will complement their work by addressing problems related to the security of physical implementations. There will also be collaboration with international universities (UC Berkeley, Stanford, CMU, MIT, etc.) and companies (IBM, Intel, NXP, Xilinx, etc.).
Scientific presentation
Background
Autonomous systems are attractive targets for cyber-attacks. Therefore, autonomous systems require cryptographic protection to safeguard confidential data, ensure data integrity and availability, enable secure authentication and authorization, etc.
In recent years, two concerning trends have emerged. First, cyberattacks have shifted from targeting high-level software components to infiltrating lower levels of the computational hierarchy—the physical implementation of systems. The most common type of attack on physical implementations is the side-channel attack. The second trend is profiled AI-based attacks, in which an attacker first trains an AI-based model on simulated attack scenarios and then uses it to break the real target with fewer computational resources.
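The profiled-attack idea can be sketched in a few lines. The following is a toy illustration, not the project's methodology: leakage is simulated with a Hamming-weight model plus Gaussian noise, a template is built from a clone device with a known key (the "profiling" phase), and the template is then used to recover an unknown key from a handful of traces. All keys, trace counts, and the noise level are made up for the example.

```python
import numpy as np

HW = [bin(v).count("1") for v in range(256)]   # Hamming-weight leakage model
rng = np.random.default_rng(0)

def leakage(pt, key, sigma=0.5):
    # One simulated leakage sample: HW of (pt XOR key) plus Gaussian noise.
    return HW[pt ^ key] + rng.normal(0.0, sigma)

# Profiling phase: a clone device with a KNOWN key yields labelled traces.
profiling_key = 0x3C
prof_pts = rng.integers(0, 256, 5000)
prof_leaks = np.array([leakage(int(p), profiling_key) for p in prof_pts])
labels = np.array([HW[int(p) ^ profiling_key] for p in prof_pts])
# Template: mean leakage per Hamming-weight class (0..8).
template = np.array([prof_leaks[labels == h].mean() for h in range(9)])

# Attack phase: few traces from the target device with an UNKNOWN key.
true_key = 0xA7
atk_pts = rng.integers(0, 256, 200)
atk_leaks = np.array([leakage(int(p), true_key) for p in atk_pts])

# Score each key candidate by how well the template predicts the traces.
scores = [((atk_leaks - template[[HW[int(p) ^ k] for p in atk_pts]]) ** 2).sum()
          for k in range(256)]
recovered_key = int(np.argmin(scores))   # minimiser should be the true key
```

The point of the "profiled" setting is visible in the trace budget: the expensive labelled data collection happens once on a device the attacker controls, after which the real target falls to far fewer measurements.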
Purpose
This project aims to secure autonomous systems from side-channel attacks.
Methods and approach
The project will investigate the use of automated tools and machine learning technologies for extracting secret information from physical implementations using leakage from various side channels. The acquired knowledge will enable the development of new AI-based vulnerability assessment methods that use side-channel leakage information as input. The project will also design countermeasures against side-channel attacks, along with supporting tools.
The project will target systems ranging from IoT systems with distributed constrained devices, to more powerful devices connected to mobile communication networks, as well as Trusted Execution Environments (TEEs). In these systems it is expected that the adversary will have physical access to devices in order to acquire side-channel information, or will at least be able to obtain timing information through a communication channel with a device. It is also assumed that there is some secret information that the adversary would like to learn.
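The timing channel in this threat model can be illustrated with a toy example. Here the number of byte comparisons performed by an early-exit equality check stands in for the response time an adversary would measure over the network; the secret token and all names are hypothetical, and real remote measurements are of course far noisier.

```python
# Hypothetical device secret (no zero bytes, to keep the toy example simple).
SECRET = b"k9f2"

def leaky_check(guess):
    # Early-exit comparison: 'steps' models data-dependent execution time.
    steps = 0
    for g, s in zip(guess, SECRET):
        steps += 1
        if g != s:
            break
    return guess == SECRET, steps

# Recover the secret byte by byte: the candidate that survives one more
# comparison than its rivals matches the next secret byte.
known = b""
for pos in range(len(SECRET)):
    pad = b"\x00" * (len(SECRET) - pos - 1)

    def timing(c):
        ok, steps = leaky_check(known + bytes([c]) + pad)
        return (steps, ok)   # 'ok' breaks the tie on the final byte

    known += bytes([max(range(256), key=timing)])
```

The standard defence is a constant-time comparison (e.g. Python's `hmac.compare_digest`), which examines every byte regardless of where the first mismatch occurs, so the measured "steps" no longer depend on the secret.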
Research program
The project is divided into four work packages:
- WP1: Side-Channel Attacks on Cryptographic Algorithms. Cryptographic protection is essential for autonomous systems. Side-channel attacks on cryptographic algorithms have been an active area of research for many years, and significant progress has been made in understanding their possibilities and limitations. However, open problems remain. One is re-evaluating the effectiveness of countermeasures against advanced AI-based side-channel attacks. Another is advancing knowledge of multi-channel attacks, which exploit side channels from multiple sources (timing, power consumption, EM emissions, etc.) to maximize the extracted information.
- WP2: Side-Channel Resistant AI Model Implementations. This WP is dedicated to side-channel analysis of AI model implementations, which will be an inseparable part of many future autonomous systems. Unlike cryptographic algorithms, artificial neural networks (NNs) process data in a parallel, distributed way, and they typically contain a significantly higher number of internal parameters and interactions. Determining which operations in a neural network leak information, and which side channels specific to AI models are exploitable, requires new analysis techniques. It is necessary to develop reliable metrics for measuring the leakage of sensitive information, such as the amount of information leaked per inference or per layer.
- WP3: Portable WebAssembly on Confidential Containers. This WP addresses problems related to side-channel analysis of TEEs. TEEs such as Intel SGX or ARM TrustZone provide secure enclaves for executing security- or privacy-sensitive computations. Our specific objective is to enhance the security of portable WebAssembly (Wasm) runtimes executing in a TEE. We plan to leverage TEEs’ hardware-based security features to protect the code and data of the runtime and isolate it from the host environment. We will design and implement a hardware abstraction layer that allows the Wasm runtime to access the underlying hardware resources, such as memory and I/O devices, while enforcing strong isolation guarantees on confidential containers. The main focus will be on the Intel TDX and AMD SEV hardware platforms.
- WP4: Remote Side-Channel Attacks. In this WP we will design defences against remote side-channel attacks, which can be carried out without physical proximity to the victim device. Traditional countermeasures may not be directly applicable to remote scenarios. Different types of remote attacks will be considered, including far-field EM-based side-channel attacks and side-channel attacks that can be launched over a network connection.
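The per-inference and per-layer leakage metrics discussed in WP2 are often built on the side-channel signal-to-noise ratio: the variance of the secret-dependent part of a leakage sample divided by the variance of the noise. The sketch below estimates this SNR on simulated Hamming-weight leakage of a single intermediate byte; the leakage model, noise level, and trace count are illustrative assumptions, not project results.

```python
import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])

# Simulated leakage of one secret intermediate byte (e.g. a quantised
# NN weight or activation): Hamming weight plus Gaussian noise.
secrets = rng.integers(0, 256, 2000)
traces = HW[secrets] + rng.normal(0.0, 1.0, 2000)   # noise sigma = 1.0

def snr(leak, labels):
    # Split each sample into its class-conditional mean (the "signal",
    # determined by the secret) and the residual (the "noise").
    classes, inv = np.unique(labels, return_inverse=True)
    means = np.array([leak[labels == c].mean() for c in classes])
    signal = means[inv]
    noise = leak - signal
    return signal.var() / noise.var()

s = snr(traces, HW[secrets])   # ~ Var(HW)/sigma^2 ~ 2 under this model
```

A per-layer leakage profile is then just this metric evaluated for each intermediate value (or each time sample) of an inference, flagging the operations whose SNR is high enough for a practical attack.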