Top 10 Notable Research Papers on Federated Learning


Federated learning, a term coined by Google researchers in 2016, is a way to train artificial intelligence (AI) models without anyone needing to see or move your raw data, which unlocks information that would otherwise be too sensitive to feed new AI applications. This decentralized form of machine learning has gained immense popularity in a short time. On that note, let us look at the top 10 notable research papers on federated learning.

Generative Models for Effective ML on Private, Decentralized Datasets

The main focus of this paper is to show that generative models trained with federated methods and formal differential privacy guarantees can be used to debug many commonly occurring data issues, even when the raw data cannot be directly inspected. The researchers explore these methods in applications to text, with differentially private federated RNNs, and to images, using a novel algorithm for differentially private federated GANs.
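The differential privacy guarantee in this line of work typically comes from a clip-and-noise aggregation step in the style of DP-FedAvg: each client's model update is clipped to a fixed norm before averaging, and calibrated Gaussian noise is added to the result. Below is a minimal sketch; the function name and noise parameters are illustrative, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each client's update to an L2 bound, average, then add Gaussian noise."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # The mean's sensitivity to any single client is clip_norm / num_clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

updates = [rng.normal(size=4) for _ in range(3)]  # toy updates from 3 clients
print(dp_federated_average(updates))
```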

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices

In this paper, researchers from Yandex, the University of Toronto, the Moscow Institute of Physics and Technology, and the National Research University Higher School of Economics propose Moshpit All-Reduce, an averaging protocol for decentralized training on unreliable, heterogeneous devices. They demonstrate the efficiency of the protocol for distributed optimization with strong theoretical guarantees, along with experiments showing strong results when training with inexpensive, failure-prone hardware.
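At its core, Moshpit All-Reduce repeatedly averages parameters inside small, changing groups of peers rather than across all devices at once, so a failed participant only disrupts its own group. Here is a strongly simplified sketch using random group assignment (the paper arranges groups in a structured grid maintained via a distributed hash table, not a random shuffle):

```python
import numpy as np

rng = np.random.default_rng(0)

def moshpit_round(params, group_size):
    """One round of group-wise averaging: shuffle the peers into small groups
    and replace each member's parameters with its group's mean. Repeating the
    round with fresh groups mixes every value toward the global average."""
    order = rng.permutation(len(params))
    for start in range(0, len(order), group_size):
        group = order[start:start + group_size]
        params[group] = params[group].mean(axis=0)
    return params

# 8 peers, each holding one scalar "model"; a few rounds approach the mean.
params = rng.normal(size=(8, 1))
for _ in range(5):
    params = moshpit_round(params, group_size=2)
print(params.ravel(), params.mean())
```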

Central Server Free Federated Learning over Single-sided Trust Social Networks

Researchers from WeBank, Kwai, the University of Southern California, the University of Michigan, and the University of Rochester propose a central-server-free federated learning algorithm, the Online Push-Sum (OPS) method, to handle various challenges in a generic setting where users only trust some of their neighbors and the trust need not be mutual. The researchers also provide a rigorous regret analysis, which shows interesting results on how users can benefit from communicating with trusted users in the federated learning environment.
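OPS builds on push-sum gossip, an averaging primitive that works over directed (one-sided) communication links, which is what makes single-sided trust workable. The sketch below shows only this averaging primitive on a toy directed ring, with the local online gradient steps of OPS omitted:

```python
import numpy as np

def push_sum(values, out_neighbors, steps=50):
    """Push-sum gossip on a directed graph: each node splits its (value, weight)
    pair among itself and its out-neighbors; the ratio value/weight converges
    to the global average even though the links are one-sided."""
    n = len(values)
    x = np.array(values, dtype=float)
    w = np.ones(n)
    for _ in range(steps):
        new_x, new_w = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = [i] + list(out_neighbors[i])  # keep a share for yourself
            for j in targets:
                new_x[j] += x[i] / len(targets)
                new_w[j] += w[i] / len(targets)
        x, w = new_x, new_w
    return x / w

# A directed ring: each node only pushes to the next one (single-sided links).
print(push_sum([1.0, 2.0, 3.0, 4.0], out_neighbors={0: [1], 1: [2], 2: [3], 3: [0]}))
```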

Federated Learning for Mobile Keyboard Prediction

'Federated Learning for Mobile Keyboard Prediction' is a paper from Google researchers demonstrating the feasibility and benefits of training next-word prediction language models on client devices without exporting sensitive user data to servers. Notably, the federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default, with distributed training and aggregation across a population of client devices.
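The server-side recipe behind this is federated averaging (FedAvg): each client trains locally for a few steps on its own data, and the server averages the returned models weighted by the clients' example counts. A minimal sketch on a toy linear model (the actual paper trains a recurrent language model; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, x, y, lr=0.1, epochs=5):
    """Client-side training: a few epochs of least-squares gradient descent."""
    for _ in range(epochs):
        w = w - lr * x.T @ (x @ w - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: clients train locally, the server averages the
    resulting models weighted by each client's number of examples."""
    updates = [(local_sgd(w_global.copy(), x, y), len(y)) for x, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Toy population: each client holds a private shard of a shared linear task.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    x = rng.normal(size=(20, 2))
    clients.append((x, x @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches [2.0, -1.0] without raw data leaving the clients
```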

Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge

Researchers from the University of Southern California reformulate federated learning as a group knowledge transfer training algorithm called FedGKT. They design an alternating minimization approach that trains small CNNs on edge nodes and periodically transfers their knowledge, via knowledge distillation, to a large server-side CNN. This has numerous advantages, including reduced demand for edge computation and lower communication bandwidth for large CNNs.
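The knowledge transfer itself relies on a standard distillation loss: the receiving model is trained to match the softened class probabilities of the sending model, rather than receiving its raw data. A small sketch of that loss (the temperature and logits below are made up; FedGKT applies distillation in both directions between edge and server):

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """KL divergence between softened teacher and student predictions,
    the signal exchanged in knowledge-distillation-based training."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) / len(p))

# Edge (small CNN) and server (large CNN) logits for a batch of 2 examples.
edge = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
server = np.array([[1.8, 0.7, -0.9], [0.0, 1.2, 0.5]])
print(distillation_loss(server, edge))  # server learns from edge predictions
```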

FedML: A Research Library and Benchmark for Federated Machine Learning

To facilitate federated learning algorithm development and fair performance comparison, researchers from Tencent and top universities introduced FedML, an open research library and benchmark. The researchers believe that their library and benchmarking framework provide an efficient and reproducible means of developing and evaluating federated learning algorithms.

Learning Private Neural Language Modeling with Attentive Aggregation

Researchers from Monash University, the University of Queensland, and the University of Technology Sydney propose an attentive aggregation method for private mobile keyboard suggestion. Their model aggregation uses an attention mechanism that weighs each client model's contribution to the global model, together with an optimization technique applied during server-side aggregation.
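One simple way to realize such attention is to weight clients by a softmax over their distance to the current global model, so that client models closer to the server model contribute more to the update. A minimal sketch in that spirit (the paper computes attention layer by layer; this simplified version uses whole-model distances, and all names are illustrative):

```python
import numpy as np

def attentive_aggregate(w_server, client_ws, epsilon=1.0):
    """Attention-weighted aggregation sketch: weights come from a softmax over
    the negative distances between the server model and each client model."""
    dists = np.array([np.linalg.norm(w_server - w_c) for w_c in client_ws])
    att = np.exp(-dists) / np.exp(-dists).sum()   # softmax over -distance
    update = sum(a * (w_c - w_server) for a, w_c in zip(att, client_ws))
    return w_server + epsilon * update

w_server = np.zeros(3)
clients = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0]),
           np.array([5.0, 5.0, 5.0])]  # the outlier client gets less weight
print(attentive_aggregate(w_server, clients))
```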

Label Leakage and Protection in Two-party Split Learning

Researchers from ByteDance and Carnegie Mellon University came together for a paper titled 'Label Leakage and Protection in Two-party Split Learning'. They show that, in two-party split learning, the norm of the gradients communicated between the parties can reveal a participant's ground-truth labels to the other party. The researchers also discuss several protection techniques to mitigate this leakage.
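The intuition is that, with a binary cross-entropy loss, positive examples tend to produce back-propagated gradients with noticeably larger norms, so the party without labels can recover them by simply thresholding the norms it observes. A toy simulation of that attack (the gradient scales here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gradients(labels, base=0.2, positive_boost=1.0):
    """Toy stand-in for per-example cut-layer gradient norms (hypothetical
    scales): positives get a larger norm, plus a little noise."""
    return np.abs(base + positive_boost * labels + 0.05 * rng.normal(size=len(labels)))

labels = rng.integers(0, 2, size=1000)          # ground truth held by one party
norms = simulate_gradients(labels)              # what the other party observes
guess = (norms > np.median(norms)).astype(int)  # simple threshold attack
print("leakage accuracy:", (guess == labels).mean())
```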

Flower: A Friendly Federated Learning Research Framework

Researchers from University College London, the University of Cambridge, and Avignon Université presented Flower, an open-source framework that supports heterogeneous environments, including mobile and edge devices, and scales to many distributed clients. With Flower in place, engineers can port existing workloads with little overhead, regardless of the ML framework used.
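In Flower, a workload is ported by wrapping its local training and evaluation in a client class. Below is a minimal sketch against Flower's NumPyClient interface, using a toy least-squares model; exact entry points vary between Flower releases, so treat the connection call as indicative only.

```python
import flwr as fl
import numpy as np

class LinearClient(fl.client.NumPyClient):
    """Toy client holding a least-squares model as one NumPy weight vector."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.w = np.zeros(x.shape[1])

    def get_parameters(self, config):
        return [self.w]

    def fit(self, parameters, config):
        self.w = parameters[0]
        # One local gradient step on private data; only weights leave the device.
        grad = self.x.T @ (self.x @ self.w - self.y) / len(self.y)
        self.w = self.w - 0.1 * grad
        return [self.w], len(self.y), {}

    def evaluate(self, parameters, config):
        self.w = parameters[0]
        loss = float(np.mean((self.x @ self.w - self.y) ** 2))
        return loss, len(self.y), {}

# With a Flower server running, a client process would connect with e.g.:
# fl.client.start_numpy_client(server_address="127.0.0.1:8080",
#                              client=LinearClient(x_local, y_local))
```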

Advances and Open Problems in Federated Learning

This broad paper, from Google in collaboration with researchers from top universities, surveys the many open challenges in the area of federated learning. No wonder it makes this list of the top 10 notable research papers on federated learning.
