PhD student position at the Department of Computer Science at Lund University.
Description of the workplace
The position is in the Software Engineering Research Group (SERG), known for relevant and rigorous research on the large-scale engineering of software systems, with significant impact on industry and society. We primarily use empirical research methods in long-term industry-academia collaborations.
Subject description
In this project, we want to examine the long-term risks of developing powerful and intelligent autonomous systems and how we can support AI alignment through continuous operational testing. The project has three main research focuses:
- How to continuously develop and update the value and constraint models that form the basis for testing. We will follow ongoing standardization and legislation processes and also go beyond them to explore new ways of addressing implicit human values and unknown future consequences of AI through testing.
- How to continuously identify and schedule effective and feasible tests based on the above models. We are looking at operational testing in both simulated and real-world environments.
- How to design interactive support for continuously reasoning about AI alignment based on collected test data. The collected data must be translated into automated decisions, or into information that helps a human assess the quality of the testing and of the system.
The project takes an engineering perspective on the above issues and will be conducted in collaboration with industry.