The Department of Computing Science, Umeå University, is seeking outstanding candidates for a PhD student position in Computer Science with a focus on trustworthy learning for anomaly detection.
Project description
Classical machine learning algorithms are more trustworthy than deep learning models because they are less complex and less opaque; on the other hand, they have significant disadvantages. The growing use of machine learning to defend computer systems against security and privacy attacks intensifies the challenge of ensuring accurate and robust models. The security and privacy of learning models are largely ignored, even though they are key for safety- and security-critical application domains such as healthcare, automotive and robotics, Industry 4.0, and cyber-physical systems. Such attacks can manipulate, evade, fool, or mislead the learning models or systems at any level, e.g., data, model, and output. As a result, current detection and defense models can suffer catastrophic performance degradation and loss of users' privacy and trust, and may also incur substantial financial losses for cloud service providers. Hence, the proposed models for detection, defense, and root-cause analysis of anomalies need to be more robust and resilient to both security and privacy attacks.
The primary aim of this project is to develop trustworthy learning methods for anomaly detection, defense, and root-cause analysis that increase model robustness, adaptability, resilience, and transparency. We propose to design and implement trustworthy machine learning algorithms for anomaly detection, defense, and root-cause analysis under adversarial settings. These algorithms will rigorously investigate the input, model, and output, leveraging (a) the geometric and statistical distribution of the data, (b) adversarial features covering a significant amount of attack variation, (c) analysis of models' internal behavior, (d) model-agnostic vulnerability analysis, and (e) security- and privacy-aware model design to address evolving adversarial attacks. These features will improve the performance, scalability, robustness, and transparency of the data, models, and inference. They also have great potential for application to edge clouds, the Internet of Things (IoT), healthcare, and Industry 4.0 under adverse conditions.
The position is intended for graduate studies in Computing Science within the Autonomous Distributed Systems Lab, but collaboration with researchers in, e.g., machine learning, mathematical statistics, optimization, trustworthy learning, deep learning, or artificial intelligence is expected. (For further information, see www.cloudresearch.org.)