PhD student position at the Department of Computing Science at Umeå University.
Privacy concerns around user data, and the legislation governing its handling, have driven growing interest in processing data on edge devices such as mobile phones rather than storing it at a central location. This is particularly important in privacy-sensitive application areas of machine learning. Much attention has therefore turned to federated learning, a class of optimization algorithms used mainly to train a global machine learning model across disparate, heterogeneous sites without sharing data between them. Federated learning can be framed as an operator splitting problem, and within the operator splitting framework it is possible to simultaneously address many of the problems that arise when training large-scale machine learning models on data from multiple sites.
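As a rough illustration of the kind of problem the project concerns, the sketch below shows federated averaging on a toy least-squares task: each client runs a few gradient steps on its private data, and a server averages the resulting models. The setup (three clients, the loss, the step sizes) is entirely hypothetical and chosen only to make the example self-contained; it is not the method the project will develop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three clients, each holding private least-squares data
# generated from the same ground-truth weights (plus a little noise).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    A = rng.normal(size=(50, 2))
    b = A @ true_w + 0.01 * rng.normal(size=50)
    clients.append((A, b))

def local_update(w, A, b, lr=0.01, steps=10):
    # A few gradient-descent steps on the client's private loss;
    # the raw data (A, b) never leaves the client.
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)
        w = w - lr * grad
    return w

# Federated averaging: each round, clients train locally and the
# server averages the returned model parameters.
w = np.zeros(2)
for _ in range(100):
    w = np.mean([local_update(w.copy(), A, b) for A, b in clients], axis=0)

print(w)  # approaches true_w without any client sharing its data
```

Operator splitting generalizes this picture: the averaging step and the local solves become applications of operators associated with each site's objective, which is what makes a unified convergence analysis possible.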
The specific goals of this project are:
- to formulate the federated learning problem as an instance of operator splitting,
- to develop numerical optimization algorithms for these formulations,
- to analyze the theoretical convergence guarantees of these algorithms,
- to develop and analyze novel loss and penalty functions for the federated learning problem, and
- to scale these developments to large-scale machine learning tasks.