The Department of Computing Science at Umeå University is seeking outstanding candidates for a PhD student position in Computer Science with a focus on trustworthy learning for anomaly detection in complex data.
Classical machine learning algorithms are often considered more trustworthy than deep learning because they are less complex and less opaque; on the other hand, they have significant disadvantages. The growing use of machine learning to defend computer systems against security and privacy attacks raises challenges in ensuring accurate, reliable, and robust models. The security and privacy of learning models themselves are largely ignored, even though they are key for safety- and security-critical application domains such as healthcare, automotive and robotics, Industry 4.0, and cyber-physical systems. Attacks can manipulate, evade, fool, or mislead learning models or systems at any level, e.g., data, model, and inference. As a result, current detection and defense models can suffer catastrophic performance degradation, compromise users' privacy and trust, and incur substantial financial losses for cloud service providers. Hence, the proposed models for detection, defense, and root-cause analysis of anomalies need to be more robust and resilient to both security and privacy attacks.
The primary aim of this project is to develop trustworthy learning methods for anomaly detection, defense, and root-cause analysis in complex data (e.g., heterogeneous, multi-source data, and data with extreme levels of missing values) in order to increase model robustness, adaptability, resilience, and transparency. We propose to design and implement trustworthy machine learning algorithms for anomaly detection, defense, and root-cause analysis under adversarial settings. These algorithms rigorously investigate the input, model, and output by leveraging (a) the geometric and statistical distribution of the data, (b) adversarial features with a significant amount of attack variation, (c) internal behavior analysis of models, (d) model-agnostic vulnerability analysis, and (e) security- and privacy-aware model design to address evolving adversarial attacks. These features improve the performance, scalability, robustness, and transparency of the data, models, and inference. They also have great potential for application to edge clouds, the Internet of Things (IoT), healthcare, and Industry 4.0 under adverse conditions.
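As a flavor of item (a), exploiting the statistical distribution of the data, the sketch below shows one simple, classical way to score anomalies while tolerating missing values: a robust z-score based on the median and the median absolute deviation (MAD). This is a minimal illustrative example, not the method the project will develop; the function names and the 3.5 cutoff are assumptions.

```python
# Illustrative sketch only: a robust, distribution-based anomaly score
# (median / MAD z-score) that tolerates missing values. Hypothetical
# helper names and cutoff; not the project's actual method.
from statistics import median

def robust_z_scores(values):
    """Score each observation by its deviation from the median, scaled
    by the median absolute deviation (MAD); None marks a missing value."""
    observed = [v for v in values if v is not None]
    med = median(observed)
    mad = median(abs(v - med) for v in observed) or 1e-9  # avoid division by zero
    # 0.6745 rescales the MAD so scores are comparable to standard z-scores
    return [None if v is None else 0.6745 * (v - med) / mad for v in values]

def flag_anomalies(values, cutoff=3.5):
    """Return indices whose robust z-score exceeds the cutoff; missing
    values are skipped rather than imputed."""
    return [i for i, z in enumerate(robust_z_scores(values))
            if z is not None and abs(z) > cutoff]

readings = [10.1, 9.8, None, 10.3, 55.0, 9.9, 10.0]
print(flag_anomalies(readings))  # flags index 4 (the 55.0 reading)
```

Median and MAD are used instead of mean and standard deviation because they remain stable even when an adversary injects a few extreme points, which hints at why distribution-aware design matters under adversarial settings.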