Doctoral student position in Computer Science at the Department of Computer Science, Lund University.
Subject description
This project aims to explore software engineering methods and tools that promote trustworthy AI for cyber-physical systems (CPS). The research approach includes 1) empirical studies of industrial practice and related regulatory frameworks for AI systems, and 2) design of methods and tools that enable trustworthy AI by design. The project particularly focuses on quality assurance (QA) of machine learning (ML) systems in the context of MLOps, i.e., the combination of ML, DevOps, and Data Engineering in highly automated development pipelines.
From a QA perspective, ML constitutes a paradigm shift compared to conventional software, both in its uncertainties (probabilistic rather than deterministic outcomes) and in its dual dependency on data and source code. Thus, MLOps pipelines must be composed of engineering tools that support QA accordingly. Further, we want to transfer MLOps concepts into CPS, i.e., digitally controlled physical systems such as autonomous vehicles and robots.