AI for Attack Identification, Response, and Recovery - Where Software and Communications Meet in the Next Decade: Cybersecurity Attacks and Defences
6G communication systems promise unprecedented data access speeds and cloud-based services. These systems rely on software, virtualization, and open interfaces to enable seamless operation across services and vendors. However, challenges arise in terms of resource availability, cybersecurity, and system complexity. This project explores AI-based components and root-cause analysis techniques to address these issues. Many questions remain unanswered, particularly regarding security and autonomous management in a rapidly evolving landscape.
The project aims to bring clarity to the following questions:
- Will the obtained machine learning (ML) performance results generalize to other data and scenarios?
- How will the uncertainties from state sensing components and autonomous mitigation actions influence overall performance?
- Will the presence of attackers in a scenario be distinguishable from benign (but unforeseen) anomalies? If so, is detecting 90% of attacks good enough?
- Will mitigation actions beyond those suggested in existing work (essentially autoscaling or migration to more capable compute platforms) also be useful in hostile scenarios?
- Considering the emerging cybersecurity and AI legislation in Europe, will fully autonomous management of infrastructures be viable, removing the need for the research community to study human-in-the-loop reactions to unfolding events?
Long term impact
Automation, the cloud-edge continuum, and security are critical for Swedish stakeholders, including Ericsson, Scania, Saab, Epiroc, and security/public safety agencies. The competence built within the project will strengthen Sweden's capability to retain skills in this vital area.
The project combines efforts from research teams in different areas to understand how to build systems with high autonomy and high resource efficiency that operate in a hostile environment. The project group recognises that (some) systems with high autonomy may be held responsible for the actions they perform, and therefore human oversight might be mandated, for which explainability and causal reasoning capabilities will be needed.
While existing work in AI@Edge has made progress, achieving higher precision and recall (beyond an F1 score of 0.90) is crucial for effective cybersecurity. Additionally, rough characterizations of situations may not suffice for hostile scenarios. To keep pace with fast-learning attackers, defenders must leverage AI for rapid and accurate responses. Balancing finite resources is essential, as certain defensive reactions could otherwise inadvertently cause a denial of service themselves.
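To make the precision/recall/F1 terminology concrete, the following is a minimal sketch of how these detector metrics relate to raw detection counts. The counts used are illustrative assumptions, not measurements from the project.

```python
# Hedged sketch: precision, recall, and F1 for an intrusion detector,
# computed from confusion-matrix counts. The numbers are illustrative only.

def f1_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# A detector catching 90 of 100 attacks while raising 10 false alarms:
p, r, f1 = f1_metrics(tp=90, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
# → precision=0.90 recall=0.90 F1=0.90
```

This illustrates why "detecting 90% of attacks" (recall alone) is an incomplete target: the same F1 score can hide very different false-alarm burdens on operators.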
The project will significantly advance the understanding of how to proactively plan strategies and how to deploy the right combination of known blueprints (game-based strategies) and cause-effect explanations for reactions at run-time.
The project brings together a team of researchers from the areas of networking, cloud computing, dependability/security, and AI planning to contribute to fundamental research in these dimensions. To provide the young researchers with the means to test their novel ideas in a future communication setting, the PIs have discussed the project with research leaders and stakeholders in industry, as well as with TietoEVRY as a supporting organisation. We envisage adding new collaboration partners as the 6G landscape unfolds during the life of the project.
Scientific presentation
Background
Adding machine learning components to solve resource optimisation and security challenges is a double-edged sword. Nobody yet fully understands how the ML components themselves can be exploited by adversaries, or how brittle the defence mechanisms we rely on can be. This project combines efforts from research teams in different areas to assemble the missing pieces of the puzzle needed for building systems with high autonomy and high resource efficiency that operate in a hostile environment.
The project group recognises that (some) systems with high autonomy may be held responsible for the actions they perform, and therefore human oversight might be mandated, for which explainability and causal reasoning capabilities will be needed. There are signs of industry expressly emphasising the need for human oversight.
Purpose
This project aims to understand the fundamental benefits and limitations of closed-loop automation when adopting ML for resource efficiency. The long-term vision is that the critical infrastructures society relies on must reap the benefits of AI/ML components while maintaining overall visibility of security controls in emerging scenarios.
Methods and approach
The project focuses on building software-intensive high-performing communication infrastructures with the following three main objectives:
- prevention of cyberthreats by anticipating and mitigating them,
- accurate detection of ongoing attacks with well-understood basic components and AI methods, and
- (partially) autonomous and explainable reaction to attacks in the presence of resource trade-offs and complex potential changes over time (concept drift).
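The concept drift mentioned in the last objective can be illustrated with a minimal sketch: a detector that calibrates on an initial window of a metric stream (e.g., request rates) and flags when a recent window's mean departs from the reference. The window size, threshold, and stream values are illustrative assumptions, not design choices of the project.

```python
# Hedged sketch of concept-drift detection via windowed mean comparison.
# Window sizes, thresholds, and the synthetic stream are illustrative only.
from collections import deque
from statistics import mean

class DriftDetector:
    """Flag drift when the recent window's mean departs from a calibrated reference."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.window = window
        self.threshold = threshold              # deviation allowed, in reference std units
        self.reference: list[float] = []        # calibration samples (assumed benign)
        self.recent: deque[float] = deque(maxlen=window)

    def update(self, x: float) -> bool:
        if len(self.reference) < self.window:
            self.reference.append(x)            # still calibrating
            return False
        self.recent.append(x)
        if len(self.recent) < self.window:
            return False
        ref_mean = mean(self.reference)
        ref_std = (sum((v - ref_mean) ** 2 for v in self.reference)
                   / len(self.reference)) ** 0.5
        return abs(mean(self.recent) - ref_mean) > self.threshold * max(ref_std, 1e-9)

detector = DriftDetector(window=20, threshold=3.0)
stream = [10.0] * 40 + [50.0] * 40              # load shifts abruptly halfway through
alarms = [i for i, x in enumerate(stream) if detector.update(x)]
print(alarms[0] if alarms else "no drift")      # → 40
```

Such a detector cannot by itself distinguish a benign load shift from a malicious one, which is exactly the attack-versus-anomaly separation the project's objectives target.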
AIR2 will significantly expand the scientific knowledge on intrusion-response approaches for softwarised networks with high-autonomy network management functions.
The project will be organised in five work packages as follows:
WP1: Learning strategies for anticipating attacks and devising countermeasures that are effective in real systems, through combining reinforcement learning and security games.
WP2: Learning interpretable network models, using them for identifying attacks, and for planning reactions against these attacks.
WP3: Providing validity arguments by linking decisions and validated actions with explanations within ML-based systems that account for adverse scenarios.
WP4: Devising scenarios in which benign load divergences can be studied distinctly from malicious denial-of-service attacks, thereby creating effective approaches for both.
WP5: Solicitation of use cases, and creation of domain-specific simulation, emulation, as well as integration in a testbed.
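The combination of reinforcement learning and security games in WP1 can be sketched in miniature: a defender learns, via tabular Q-learning, which response pays off against a stochastic attacker in a toy two-action game. The action names, payoff matrix, attacker distribution, and learning parameters are all illustrative assumptions, not the project's actual models.

```python
# Hedged sketch for WP1: a defender learning a response policy with tabular
# Q-learning against a stochastic attacker in a toy two-action security game.
# Payoffs, action names, and learning parameters are illustrative assumptions.
import random

random.seed(0)

ACTIONS = ["monitor", "block"]          # defender actions
ATTACKS = ["probe", "flood"]            # attacker actions (drawn at random here)

# Defender reward: blocking a flood or merely monitoring a probe is good;
# blocking a harmless probe wastes resources, monitoring a flood lets it through.
REWARD = {("monitor", "probe"): 1.0, ("monitor", "flood"): -2.0,
          ("block", "probe"): -0.5, ("block", "flood"): 2.0}

def train(episodes: int = 5000, alpha: float = 0.1, eps: float = 0.1) -> dict:
    q = {a: 0.0 for a in ACTIONS}       # single-state Q-values (a bandit, for brevity)
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        attack = random.choices(ATTACKS, weights=[0.3, 0.7])[0]  # flood-heavy attacker
        q[a] += alpha * (REWARD[(a, attack)] - q[a])
    return q

q = train()
print(max(q, key=q.get))  # against a flood-heavy attacker, "block" should win
```

In the full game-theoretic setting of WP1 the attacker also adapts, so the defender must learn strategies that are robust against a best-responding opponent rather than a fixed distribution as in this bandit simplification.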
NEST Environments
The four applicants for the project together supervise or co-supervise 28 PhD students and postdocs in their groups and contribute to 26 national or international projects.