WASP-funded postdoc position at the Department of Computing Science at Umeå University.
Project description and working tasks
The rapid development of autonomous systems, connected devices, and distributed applications poses several challenges in dealing with petabytes of data across diverse resource-constrained environments. Federated machine learning (FML) addresses these challenges through collaborative learning, without sharing raw data with centralised servers. However, several emerging threats target FML training, learning, and inference, causing models to fail or be misled in early learning rounds; backdoor and bit-flip attacks in particular, and the corresponding defence strategies, remain under-explored in FML. Such attacks jeopardise trustworthy performance on any downstream task.

This project therefore envisions developing and validating attack and defence strategies for federated learning with limited and diverse non-IID (non-independent and identically distributed) data under non-standard and adversarial settings, which are ideally suited to edge AI infrastructures. These goals can be achieved by equipping federated learning algorithms with features such as robust training, model restoration, trustworthy device selection, secure learning and inference, fault tolerance against failures and attacks, and resilient, fair, and robust models. The ambition is to validate these methods in classical non-standard settings and to apply them in constrained environments (e.g., the Internet of Things (IoT) and robotic arms).

Teaching of up to a maximum of 20% may be included in the work tasks.
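To illustrate the collaborative-learning idea described above (clients train locally on their own non-IID data and only model weights, never raw data, are shared with the server), here is a minimal federated averaging (FedAvg) sketch. The linear model, the simulated clients, and all names are illustrative assumptions, not part of the project description:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on MSE.
    Only the resulting weights leave the client, never (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate non-IID clients: each sees inputs from a shifted distribution.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(loc=shift, size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

# FedAvg rounds: broadcast weights, train locally, average the updates.
global_w = np.zeros(3)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

print(np.round(global_w, 2))
```

Despite the distribution shift between clients, the averaged model recovers weights close to `true_w`; attacks studied in the project (e.g., backdoor or bit-flip manipulation of a client's update before averaging) would target exactly this aggregation step.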