Chalmers seeks a postdoctoral researcher for a 2-year position in formal analysis of AI-based systems within the Department of Computer Science and Engineering.
Project description
Are you passionate about advancing the safety and reliability of learning-enabled systems? Join the Group for Safe and Trustworthy Autonomous Reasoning (STAR) to lead cutting-edge research in the design and analysis of such systems. At STAR, we tackle key questions such as:
– How can we build systems that are both intelligent and inherently trustworthy?
– What methods ensure reliability and transparency in AI-driven decision-making?
We are looking for candidates interested in conducting research in one or more of the following areas:
– Runtime verification under uncertainty
– Statistical learning and safe reinforcement learning
– Simulation-based analysis
– Explainability methods for AI/ML-based systems
– Specification formalisms for learning-enabled systems
We also encourage candidates to propose new directions aligned with both their own research interests and ours (https://starlab.systems). This position is funded by the prestigious WASP program and, as such, comes with exciting career development opportunities.
About the STAR Lab
Led by Dr. Hazem Torfah, STAR is part of the Computing Science Division within the Department of Computer Science and Engineering (CSE). Our interdisciplinary group focuses on developing theoretical foundations and tools for creating safe, reliable, and secure autonomous systems, with application areas including autonomous driving and aviation. Collaboration and support are central to our dynamic research environment.
The CSE department, a joint effort between Chalmers University of Technology and the University of Gothenburg, is home to a diverse, international community of 270 researchers from over 30 countries.