
The Pros and Cons of Federated Learning

Ilya Razenshteyn
June 13, 2023

Federated Learning (FL) is a technique for training stochastic-gradient-based Machine Learning models (primarily neural networks) in a distributed way without transferring raw training data.


Summary

Federated Learning is primarily aimed at data that lives on user devices, but it can also be used for server-to-server training. The archetypal use case for FL is training a model on data from many users' smartphones (e.g. Gboard autocomplete).

Examples of what Federated Learning can do:

  • Train a neural network for keyboard autocomplete (e.g. Gboard) on texts typed on users' smartphones, without collecting those texts;
  • More generally, train any stochastic-gradient-based model on data that stays distributed across user devices.

Examples of what Federated Learning cannot do:

  • Train a decision tree model to predict credit card fraud (not stochastic-gradient-based);
  • Join two tables over a column (not machine learning).

The strong side of federated learning is performance: it doesn't incur a large computation overhead compared to training a model in the conventional way. The only notable overhead is networking: user devices must send a significant amount of data (model updates) to the server.

However, the weak side of federated learning is privacy. While it is positioned as a privacy-enhancing technology that protects user data, it provides no theoretical privacy guarantees. Moreover, in practice, reconstruction attacks exist against all deployed Federated Learning-based systems, allowing an attacker to recover some (or most) of the training examples. In some cases, these systems provide even worse privacy than conventional training, leaking the training data not only to the central server, but also to other users!

These issues are known to the research community, and there is active work on mitigating them (with some promising directions, e.g. differentially private model updates). However, to the best of our knowledge, there are currently no Federated Learning systems that are both practical and privacy-preserving.

An alternative approach to privacy-preserving machine learning is to run computations on encrypted data, without decrypting it, via Secure Multiparty Computation (SMPC) or Homomorphic Encryption (HE). The table below compares the two options.

| Technology | Scope | Performance | Privacy of the user data |
| --- | --- | --- | --- |
| FL | Only gradient-based ML | Small overhead | Very high risks |
| SMPC/HE | Most real-life computations | Constant-factor overhead | Provably private |

What is Federated Learning?

Federated Learning modifies the training process of the machine learning model, so first we provide some background on this process. Let's consider a model trained by Stochastic Gradient Descent (typically a neural network). The training process consists of repeating the following steps:

  1. Take a random set of training examples (e.g. texts typed by the user);
  2. For every example, compute the gradient of the current model at this example, like in most SGD-based training methods. The particular way it is computed doesn't matter for our story: it is just a bunch of numbers which are somehow computed based on the current model and the training example;
  3. Aggregate the gradients according to some rule and use the result to update the model (a minimal sketch of this loop is shown below).
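For concreteness, here is what this conventional, centralized loop looks like in code. The toy linear model, squared-error loss, and hyperparameters are illustrative assumptions rather than anything prescribed above:

```python
import numpy as np

def example_gradient(w, x, y):
    """Gradient of the squared error 0.5 * (w @ x - y)**2 for a single example."""
    return (w @ x - y) * x

def centralized_sgd(w, xs, ys, lr=0.1, steps=100, batch_size=32, seed=0):
    """All raw training examples (xs, ys) live on the central server."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        idx = rng.choice(len(ys), size=batch_size)                 # 1. random batch
        grads = [example_gradient(w, xs[i], ys[i]) for i in idx]   # 2. per-example gradients
        w = w - lr * np.mean(grads, axis=0)                        # 3. aggregate and update
    return w
```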

In its simplest form, Federated Learning exploits a rather natural idea: why perform step (2) on the central server if it can be done by the clients? Then the clients only need to send their gradients to the server, and the server sends them back the updated model.

The hope is that gradients contain only the information necessary for updating the model, and that the private details from user-provided examples are not leaked. Of course, nothing is formally proven (i.e. there are no guarantees), but the apparent complexity of neural nets gives some hope that extracting private information from a gradient will be difficult. Moreover, we can pre-combine the gradients across all of our local examples and send them to the server as a single batch, which further complicates a potential attack.
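Putting this together, one round of the scheme described above might look like the sketch below (same toy linear model as before; treating one pass over each client's local data as a single averaged gradient is a simplification we are assuming):

```python
import numpy as np

def client_update(w, local_xs, local_ys):
    """Runs on the user's device: compute per-example gradients on local data
    and pre-combine (average) them; only this aggregate leaves the device."""
    grads = [(w @ x - y) * x for x, y in zip(local_xs, local_ys)]
    return np.mean(grads, axis=0)

def server_round(w, clients, lr=0.1):
    """Runs on the central server: collect the clients' aggregated gradients,
    average them, update the model, and broadcast the new weights back."""
    updates = [client_update(w, xs, ys) for xs, ys in clients]  # received over the network
    return w - lr * np.mean(updates, axis=0)
```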

One might be suspicious about such reasoning. For example, we had several kilobytes of texts, and now we have transformed them somehow and are sending hundreds of megabytes of derived data over the network. How do we know that the texts cannot be recovered from this massive amount of derived data? It turns out that these suspicions are warranted: there are relatively simple reconstruction algorithms for commonly used models trained with the federated learning scheme described above (we discuss some examples in the Attacks section below).
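To make "relatively simple" concrete, here is a simplified sketch in the spirit of the gradient-matching ("Deep Leakage from Gradients", https://arxiv.org/pdf/1906.08935.pdf) family of attacks: the attacker optimizes dummy data until its gradients match the gradients the client actually sent. The optimizer choice, the step count, and the treatment of labels as continuous variables are our own simplifying assumptions:

```python
import torch

def gradient_matching_attack(model, loss_fn, observed_grads, x_shape, y_shape, steps=300):
    """Reconstruct a training example from the gradients it produced by
    optimizing dummy data whose gradients match the observed ones."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(y_shape, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            loss_fn(model(dummy_x), dummy_y), model.parameters(), create_graph=True)
        # Distance between the dummy data's gradients and the observed gradients.
        diff = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```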

As it stands now, there is an arms race in the research community: some researchers propose mitigations, while others find ways to reconstruct the data under the mitigated protocols. This race doesn't look likely to be resolved any time soon, not until there is a federated learning scheme that provides provable privacy guarantees.

Luckily, there is also some promising work in this direction: researchers have explored applying differential privacy techniques to the gradients that are sent from clients to the server. This would resolve the privacy issues in a principled way, but unfortunately, existing differential privacy approaches degrade the quality of the trained model too much, and more work is needed to make these techniques practical.

Attacks and mitigations

As soon as the first (and perhaps still the most prominent) Federated Learning scheme, FedAvg, was proposed, researchers quickly found numerous ways to attack it (some examples: 1, 2, 3, 4). However, this wasn't the end: an arms race between attacks and mitigations started, with attackers getting much more creative, not only extracting private data but also "poisoning" the model, making it behave in ways the attacker desires without other parties noticing.

A great survey of attacks (and ready-to-use code for attacking) can be found in this github repo.

Data leakage

This is the most obvious, and historically the first, angle of attack. The question it aims to answer is: how private is Federated Learning, actually? That is, can we reconstruct the training examples from the information the clients send to the server? It very quickly turned out that the answer is a definitive "yes". Below, we discuss some attacks and show reconstructions for image models as examples (other types of models can be attacked in a similar way, but the results are harder to visualize).

The simplest possible scenario is the following: all parties (clients and server) are honest and trying to do their best to avoid leaking the data. Does the data leak in this case? It turns out that even simple techniques allow us to reconstruct examples pretty well.

Source: https://arxiv.org/pdf/2110.13057.pdf

Increasing the batch size (the number of gradients aggregated on the client side) helps somewhat, but training examples can still be reconstructed reliably unless the batch is gigantic (e.g. thousands of examples, which is impractical when the gradients are aggregated on a single user's device). The federated learning literature also suggests aggregating gradients across clients using a more reliable privacy technology such as SMPC or Homomorphic Encryption (which would significantly increase the effective batch size), but real-world deployments still don't use it.
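This cross-client aggregation idea is usually referred to as secure aggregation. A toy illustration of its core trick, pairwise cancelling masks, is sketched below; real protocols additionally need key agreement, finite-field arithmetic, and handling of clients that drop out, all of which we omit:

```python
import numpy as np

def mask_updates(updates, seed=0):
    """Each pair of clients (i, j), i < j, agrees on a random mask: client i adds
    it, client j subtracts it. The masks cancel in the sum, so the server learns
    only the aggregate, not any individual client's update."""
    rng = np.random.default_rng(seed)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            r = rng.normal(size=updates[i].shape)
            masked[i] += r
            masked[j] -= r
    return masked

def server_aggregate(masked_updates):
    """Equals the sum of the original, unmasked updates."""
    return np.sum(masked_updates, axis=0)
```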

The next question is: what if the server is malicious (either intentionally or by accident) and makes some effort towards reconstructing training examples? It turns out the server can introduce very slight modifications to the model that make perfect reconstructions possible even with a very large batch size.

Source: https://arxiv.org/pdf/1906.08935.pdf

This makes federated learning insecure against a malicious server pretty much by design (malicious server => all data is leaked). A more realistic scenario is a relatively trustworthy server (e.g. run by a company that values its reputation) combined with some malicious clients (e.g. one of the millions of users is a hacker who wants to attack the system). What can possibly go wrong in such a scenario? This leads us to another type of attack: model poisoning.

Poisoning

Since the gradients sent to the server are controlled by users, users can modify these gradients to affect the resulting model. How much can they affect it? It turns out that it is not hard to inject hidden behavior into the model in a way that is hard to detect, see e.g. this article. These types of attacks are called model poisoning. One common goal of poisoning is to make the model behave however the attacker wants on a certain slice of data (e.g. in the context of keyboard auto-completion, one can make the model complete a friend's name with offensive words). This kind of poisoning can be especially dangerous for decision-making models.
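As a caricature of how far a single client can push this, here is a sketch of a "model replacement" style poisoned update: the malicious client scales its contribution so that the server's averaging step lands on whatever weights the attacker wants. The averaging rule and learning rate match the toy FedAvg sketch above and are assumptions:

```python
import numpy as np

def malicious_update(w_global, w_poisoned, n_clients, lr=0.1):
    """Craft a 'gradient' such that, if the other n_clients - 1 updates are close
    to zero, the server step  w_global - lr * mean(updates)  moves the global
    model (approximately) onto the attacker's poisoned weights."""
    return n_clients * (w_global - w_poisoned) / lr
```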

Moreover, combined with the data leakage attacks, poisoning enables new attack scenarios:

  • A malicious client poisons the model so that training examples become easy to reconstruct from aggregated gradients;
  • They wait for the model to get updated and save the updated model weights they receive from the server;
  • They wait for one more version of the model;
  • They compute the delta between the two versions and reconstruct other users' private training examples from it (sketched below).
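In code, this last step is little more than a subtraction; reconstruct_fn stands for a reconstruction routine such as the gradient-matching sketch above and is a hypothetical placeholder:

```python
def reconstruct_from_model_delta(w_previous, w_current, reconstruct_fn):
    """The attacker treats the difference between two consecutive global models
    as an aggregate of the other users' (poisoning-weakened) updates and feeds
    it into a reconstruction routine."""
    return reconstruct_fn(w_current - w_previous)
```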

This way, the leakage is even worse than without federated learning: not only the server, but also the clients get other clients' private data.

Mitigations

There is a lot of ongoing research on mitigating the shortcomings of federated learning. One of the most promising directions is applying differential privacy to the gradients: if one can provably bound how much any single training example affects the shared gradients, individual examples become unidentifiable. The problem with this approach is that the amount of noise needed to achieve differential privacy breaks real-life, non-trivial models. So far this approach remains impractical, but the amount of ongoing research gives some hope.
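A common way to instantiate this idea (in the spirit of DP-SGD / DP-FedAvg) is to clip each client's update and add calibrated Gaussian noise during aggregation. The clip norm and noise multiplier below are illustrative assumptions, and a real deployment would also have to track the resulting privacy budget:

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Clip each client's update to a fixed L2 norm (bounding any one client's
    influence), then add Gaussian noise calibrated to that norm."""
    rng = np.random.default_rng(seed)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates]
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(updates)
```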

Another (even more experimental) direction is to avoid sending gradients altogether. The idea is for clients to train local models and then share those models' predictions on other clients' data. This approach suffers from the same problem as canonical FL: there are no guarantees, so it is likely that it, too, can be attacked.
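A minimal sketch of this prediction-sharing idea (in the spirit of federated distillation schemes; for simplicity we assume the predictions are exchanged over a shared dataset rather than directly on other clients' data, and the averaging rule is our own assumption):

```python
import numpy as np

def prediction_sharing_round(local_models, shared_x):
    """Each client runs its own locally trained model on a shared dataset and
    shares only the resulting predictions; the server averages them into 'soft
    labels' that every client can then distill into its local model."""
    all_preds = [model(shared_x) for model in local_models]  # computed on-device
    return np.mean(all_preds, axis=0)                        # the only shared artifact
```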

Finally, many FL researchers advertise the possibility of combining FL and technologies which provide theoretical privacy guarantees, e.g. SMPC and Homomorphic Encryption. This loses the advantage of low compute overhead, but gives some actual privacy. Pyte’s SMPC engine supports training models for a small number of data owners, so the added value of FL comes from scaling to train the models across millions of devices.
