Mauro Conti, a cybersecurity and data privacy expert, has been appointed Wallenberg Guest Professor and Chair in Cybersecurity. His role will involve establishing a group specialising in cybersecurity in Sweden.
In spring 2023, WASP announced a 200 million SEK investment in cybersecurity, part of which has been earmarked for recruitment. Now, Mauro Conti has been engaged as Wallenberg Guest Professor and Chair in Cybersecurity and will work part-time at Örebro University for five years. He is a professor of computer science at the University of Padua, where he specialises in cybersecurity and data integrity.
“My goal is to help establish a cybersecurity group and promote the internationalisation of Sweden in this area,” says Mauro Conti.
AI enhances security but can be exploited
His research focuses on cybersecurity in mobile phones, blockchain, connected home appliances, and smart cities. Artificial intelligence is a recurring theme in his studies.
“AI can enhance the security of applications and other digital environments, but it can also be exploited to execute attacks. Furthermore, the cybersecurity of AI itself is a major concern. For example, by influencing AI’s learning process, one can impact its decisions,” says Mauro Conti.
“As it is challenging for humans to grasp how these systems function, the systems are easily manipulated.”
Backdoors – one way of manipulating systems
For example, imagine a hospital that uses deep learning techniques for automated cancer diagnosis. The model is trained on a dataset of biopsy images, labelled as benign or malignant, and then used to analyse new histopathological images. A malicious attacker might seek to tamper with the model and make it misclassify scans of malignant biopsies as benign.
This could be done by poisoning the training data: a tiny, imperceptible modification is introduced into the cell texture of the benign scans during the training phase. If this modification is present in enough samples, the model learns to associate it with a benign scan. Later, if the same modification appears in the scan of a malignant biopsy, the model will wrongly classify that scan as benign.
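The poisoning scenario above can be illustrated with a minimal, self-contained Python sketch. The data, feature dimensions, and trigger pattern are all hypothetical stand-ins chosen so the effect is visible: a small "trigger" is blended into benign training samples only (their labels stay correct, which is what makes this a clean-label attack), and a simple logistic-regression model learns to treat the trigger as a benign signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for biopsy scans: 64 "texture" features per image.
# The two classes are deliberately hard to tell apart naturally,
# so the model is tempted to latch onto any shortcut it finds.
n, d = 200, 64
benign = rng.normal(0.49, 0.05, (n, d))
malignant = rng.normal(0.51, 0.05, (n, d))

# Clean-label poisoning: a tiny trigger pattern is blended into the
# benign training images only; their labels remain correct ("benign").
trigger = np.zeros(d)
trigger[:8] = 0.25
X = np.vstack([benign + trigger, malignant])
y = np.array([0] * n + [1] * n)          # 0 = benign, 1 = malignant

# Train a plain logistic-regression classifier by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

predict = lambda x: int(x @ w + b > 0)   # 1 = malignant

scan = rng.normal(0.51, 0.05, d)         # a new malignant scan
print(predict(scan))            # 1 -> correctly flagged malignant
print(predict(scan + trigger))  # 0 -> the trigger flips it to "benign"
```

A real attack would target a deep network and a far subtler trigger, but the mechanism is the same: the model learns a spurious association during training that the attacker can invoke at will.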
“It is a so-called backdoor that can be introduced into the system. Similarly, you can manipulate systems in the physical world. Imagine that a piece of paper of a special colour is placed on a stop sign, tricking self-driving cars into thinking it’s a speed limit sign – this could have serious consequences,” explains Mauro Conti. The automotive industry exemplifies an intriguing area for future research, as cars integrate both physical and digital components.
Different attacks call for different mitigation techniques
In fact, attackers can apply different strategies against AI systems. Some techniques, known as dirty label attacks, introduce a small percentage of training data with wrong labels. Other attacks, such as the cancer diagnosis example above, introduce hidden patterns that steer the model’s decision process; these are called clean label attacks.
Other attacks do not tamper with the training data at all but manipulate the model directly by changing the neurons’ weights, while some attacks (adversarial attacks) can induce a model to fail by manipulating only the input data. These categories illustrate the range of attacks that can be launched against AI systems, as well as the complexity of defining mitigation techniques and strategies.
The sound of keystrokes
In previous studies, he researched how to prevent man-in-the-middle attacks, in which an attacker secretly positions themselves between two parties and may even alter the communication between them. An earlier study also caught the attention of Forbes, whose reporter took part in an experiment in which Conti and his colleagues successfully deduced a password the reporter typed during a video call.
Mauro Conti describes: “Since every key on a laptop emits a different sound, it’s quite easy to determine what they are typing if the keystrokes are audible during the video call.”
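The gist of such an acoustic side-channel attack can be shown with a heavily simplified Python sketch. Everything here is hypothetical: each key is modelled as "ringing" at its own dominant frequency (real keyboards differ in far subtler spectral and timing cues), the attacker first builds a spectral profile per key from labelled recordings, then matches overheard keystrokes to the closest profile.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 8000                       # sample rate (Hz)
T = np.arange(0, 0.02, 1 / FS)  # a 20 ms keystroke clip

# Hypothetical simplification: each key rings at its own frequency.
KEY_FREQ = {"a": 900.0, "b": 1300.0, "c": 1700.0}

def click(key):
    """A noisy, damped 'click' for the given key."""
    tone = np.sin(2 * np.pi * KEY_FREQ[key] * T) * np.exp(-200 * T)
    return tone + rng.normal(0, 0.1, T.size)   # microphone noise

def features(signal):
    spec = np.abs(np.fft.rfft(signal))
    return spec / np.linalg.norm(spec)         # normalised spectrum

# Phase 1: build a spectral profile per key from labelled samples.
profiles = {k: np.mean([features(click(k)) for _ in range(20)], axis=0)
            for k in KEY_FREQ}

# Phase 2: match each overheard keystroke to the closest profile.
def guess(signal):
    f = features(signal)
    return min(profiles, key=lambda k: np.linalg.norm(f - profiles[k]))

typed = "abcba"
print("".join(guess(click(k)) for k in typed))  # recovers "abcba"
```

Published attacks of this kind use richer features and classifiers, but the two phases, profiling known keystrokes and matching unknown ones, are the same.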
Mauro Conti’s recruitment is funded by WASP (Wallenberg AI, Autonomous Systems and Software Program). In addition to the five-year guest professorship, WASP funds the recruitment of one assistant or associate professor and two PhD students, while Örebro University adds three postdocs and one PhD student.
Translation: Jerry Gray, Örebro University
Published: April 14th, 2025