Meet the WASP Postdocs
Victor Morel’s fascination with privacy and data protection began during an Erasmus year at Uppsala University in Sweden. After completing his PhD on protecting privacy in France, he had the opportunity to return to Sweden for a postdoc within the WASP NEST CyberSecIT at Chalmers University of Technology. Victor collaborates with Simone Fischer-Hübner, Professor of Privacy and Security at Karlstad University, Visiting Professor at Chalmers, and part of the WASP Faculty.
In the CyberSecIT project, Victor Morel and his team are conducting research on people’s preferences regarding privacy permissions. Their ultimate goal is to develop an AI system that can help people enhance security and privacy in IoT environments. However, determining whether an automated privacy assistant genuinely serves people’s best interests or merely molds their preferences into a predefined notion is a complex question.
What is your postdoc project about?
My postdoc is within a bigger project called CyberSecIT, which is about securing IoT (Internet of Things) applications. In the project, I design a system of usable privacy permissions so that all users become familiar with the permissions. When you install an app, you have some control over your permissions, but not so much when it comes to IoT applications. What we are trying to do is to elicit people’s preferences in these settings: whether they would like to be in full control, for example, or to have decisions automated. The end goal for this part of the project is to build an assistant, typically another application, that helps you manage your privacy decisions in IoT applications.
The background to this project is partly a new paradigm in IT called Trigger-Action Programming, which means that anyone can program very small snippets of code. A typical example is IFTTT, which stands for “If this, then that”: if I receive this e-mail, it triggers that action. It is a kind of pairing of IT applications. IFTTT creates new opportunities, but also new risks in terms of security and privacy. Typically, all the data that could possibly be collected is collected and processed, but we don’t really know for what purpose. Therefore, we need to assist people in making privacy decisions. But can we be sure that we are helping people towards their best interest? That is a very tough question to answer.
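The “if this, then that” idea can be illustrated with a small snippet. This is a minimal sketch of a trigger-action rule, not code from IFTTT or from the CyberSecIT project; the `Rule` class and the e-mail trigger are purely hypothetical:

```python
# Minimal sketch of a trigger-action ("if this, then that") rule.
# The Rule class and the example event are illustrative only,
# not taken from any real IFTTT API.

class Rule:
    def __init__(self, trigger, action):
        self.trigger = trigger  # predicate over an incoming event
        self.action = action    # callable run when the trigger matches

    def handle(self, event):
        if self.trigger(event):
            self.action(event)

# "If I receive an e-mail from my boss, then flag it as urgent."
flagged = []
rule = Rule(
    trigger=lambda e: e.get("type") == "email" and e.get("from") == "boss@example.com",
    action=lambda e: flagged.append(e["subject"]),
)

rule.handle({"type": "email", "from": "boss@example.com", "subject": "Quarterly report"})
print(flagged)  # → ['Quarterly report']
```

The privacy risk Victor points to is visible even here: the rule engine sees every event (sender, subject, content) in order to evaluate its triggers, whether or not the user understands what is being collected.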
Tell us more about the NEST, CyberSecIT.
The project is led by Andrei Sabelfeld, who is an expert in web security. In CyberSecIT we have different components – AI people, security people, and privacy people. Some researchers in the NEST are more interested in the security aspect: how you can be sure that these IoT applications are actually doing what they claim to do, and how you can, for instance, train AI systems in a way that is decentralized and privacy-preserving. If you bring all the data into the same place, then you have privacy risks; if you spread the data, it becomes a bit challenging to actually build AI systems, but that is something research can work on. Then you have the best of both worlds: privacy and the benefits of AI automation.
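The decentralized, privacy-preserving training Victor describes is often realized as federated learning, where each party computes an update on its own data and only the updates, never the raw data, are aggregated centrally. A minimal sketch of that idea, with a trivial “model” (a mean) standing in for real training:

```python
# Minimal sketch of the federated-learning idea: each client computes
# an update locally, and the server averages only the updates.
# Purely illustrative; real systems average model parameters, not means.

def local_update(data):
    # Each client "trains" on its own data; here the model is just a mean.
    return sum(data) / len(data)

def federated_average(client_datasets):
    # The server never sees the raw data, only the clients' updates.
    updates = [local_update(d) for d in client_datasets]
    return sum(updates) / len(updates)

clients = [[1.0, 2.0, 3.0], [4.0, 6.0], [10.0]]
print(federated_average(clients))  # average of the local updates 2.0, 5.0, 10.0
```

The design trade-off is exactly the one Victor names: keeping data spread out makes training harder to coordinate, but the raw data never has to leave each device.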
What big questions, issues or ethical dilemmas do you face in your research field?
The big challenge in computer science is how we can bring two different subdisciplines together: lawfulness and usability. How can we create solutions that people actually use, and that are compliant with the law? A very interesting episode, I would say, was when computer science started to look at what was happening on the legal side. The General Data Protection Regulation (GDPR) coming into force fostered a lot of research, and people realized that this is very interesting. Now we are also looking at how we can combine it with usability – human factors, basically. I think this is a very big challenge.
One upcoming issue is also large language models and chatbots. It is not something I am working on yet. However, the field is growing very fast; it has a lot of benefits, but the way these models have been released can also cause many issues. Did we ask for consent to train these models? It does not seem so, actually. That is a bit of an issue, and maybe something I would be interested in working on in the future.
Why is your research important?
Because privacy is important in itself – everyone needs privacy. It is a human right, and it brings a lot of benefits to us as humans. That is also why I became interested in this field in the first place: usable privacy is something we can bring to everyone.
What we usually do is develop the technology, but we tend to forget that this technology is meant to be used by humans. Humans are often the weakest link in these systems.
What do you miss in Sweden that you have in France, for example?
Sorry, I would say the food. I do like good food, and it is sometimes harder to get good raw ingredients here in Sweden. Sweden does not have the same sun or the same soil, so it is harder to grow good vegetables.
Published: August 18th, 2023