The Department of Computing Science at Umeå University is seeking outstanding candidates for a PhD student position in Computer Science with a focus on trustworthy learning for anomaly detection in complex data.

Project Description

Classical machine learning algorithms are often considered more trustworthy than deep learning models because they are less complex and less opaque, but they also have significant disadvantages. The growing use of machine learning to defend computer systems against security and privacy attacks raises new challenges in ensuring accurate, reliable, and robust models. The security and privacy of learning models are still largely ignored, even though they are key for safety- and security-critical application domains such as healthcare, automotive systems and robotics, Industry 4.0, and cyber-physical systems. Attacks can manipulate, evade, fool, or mislead learning models and systems at any level, e.g., data, model, or inference. As a result, current detection and defense models can suffer catastrophic drops in performance, compromise users' privacy and trust, and incur substantial financial losses for cloud service providers. Hence, models for the detection, defense, and root-cause analysis of anomalies need to be more robust and resilient to both security and privacy attacks.

The primary aim of this project is to develop trustworthy learning methods for anomaly detection, defense, and root-cause analysis in complex data (e.g., heterogeneous or multi-source data, or data with extensive missing values) in order to increase model robustness, adaptability, resilience, and transparency. We propose to design and implement trustworthy machine learning algorithms for anomaly detection, defense, and root-cause analysis under adversarial settings. These algorithms will rigorously investigate the input, model, and output, leveraging (a) the geometric and statistical distribution of the data, (b) adversarial features covering a wide range of attack variations, (c) analysis of the internal behavior of models, (d) model-agnostic vulnerability analysis, and (e) security- and privacy-aware model design to address evolving adversarial attacks. These techniques will improve the performance, scalability, robustness, and transparency of the data, models, and inference. They also have great potential for application to edge clouds, the Internet of Things (IoT), healthcare, and Industry 4.0 under adverse conditions.
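To make point (a) concrete, the sketch below shows a deliberately simple statistical anomaly detector (z-scores over a single feature) and why such baselines are fragile under adversarial settings: a single poisoned spike can inflate the estimated mean and variance enough to mask itself. The function name, data, and threshold are illustrative assumptions, not part of the project plan.

```python
import statistics

def zscore_anomalies(values, threshold):
    """Flag indices whose |z-score| exceeds the threshold.

    A simple statistical baseline: an adversary who injects extreme
    points can inflate the mean and standard deviation, lowering the
    z-scores of true anomalies (a masking attack). Robust estimators
    such as the median/MAD mitigate this.
    """
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical sensor readings with one obvious spike at index 5.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 25.0, 10.1, 10.0]
print(zscore_anomalies(data, 2.0))   # the spike at index 5 is flagged
print(zscore_anomalies(data, 3.0))   # with a common "3-sigma" cutoff the
                                     # spike masks itself (z ≈ 2.5) and
                                     # nothing is flagged
```

Note how the outlier itself drags the standard deviation up, so the conventional 3-sigma rule misses it entirely; this is exactly the kind of fragility that motivates robust, adversary-aware detection models.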

More Information and Application
