Linköping University is seeking a PhD student in Trustworthy Machine Learning at the Software and Systems Division within the Department of Computer and Information Science.
Position Description
Trustworthy machine learning is an umbrella term for methods and tools that ensure AI and ML systems are verifiable, robust, secure, privacy-preserving, and ethical, leading to greater adoption and positive societal impact. In safety-critical domains such as healthcare or autonomous driving, as well as in security-critical systems such as AI-native networks or financial services, AI/ML that is not secure, robust, verifiable, or privacy-preserving can lead to safety risks, regulatory violations, and significant reputational damage. By making AI trustworthy, we enable its large-scale, reliable use across industries.
You will work at the intersection of machine learning, cybersecurity, and privacy, developing methods that make AI systems trustworthy, verifiable, and robust against adversarial manipulation. You will have the opportunity to contribute to one or more of the following research directions:
- Verifiable training and trustworthy AI pipelines.
- Tools for robust data and model provenance in adversarial environments.
- Methods for protecting training data and end users, including secure data removal and machine unlearning.
- Machine unlearning strategies that balance privacy and utility in continual, federated, and distributed settings.