Chapter 15. Deep learning on unseen data: introducing federated learning


In this chapter

  • The problem of privacy in deep learning
  • Federated learning
  • Learning to detect spam
  • Hacking into federated learning
  • Secure aggregation
  • Homomorphic encryption
  • Homomorphically encrypted federated learning

“Friends don’t spy; true friendship is about privacy, too.”

Stephen King, Hearts in Atlantis (1999)

The problem of privacy in deep learning

Deep learning (and the tools for it) often means you have access to your training data

As you’re keenly aware by now, deep learning, being a subfield of machine learning, is all about learning from data. But often, the data being learned from is incredibly personal. The most meaningful models interact with the most personal information about human lives and tell us things about ourselves that might have been difficult to know otherwise. Put another way, a deep learning model can study thousands of lives to help you better understand your own.

The primary natural resource for deep learning is training data (either synthetic or natural). Without it, deep learning can’t learn; and because the most valuable use cases often involve the most personal datasets, deep learning is often the reason companies seek to aggregate data. They need it in order to solve a particular use case.

Federated learning
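Before getting into the details, here is a minimal sketch of the core idea: instead of shipping private data to a central server, the model travels to each client, trains locally, and only the updated weights come back to be averaged. Everything in this example (the toy linear model, the client data, the function names) is illustrative, not code from the chapter:

```python
import numpy as np

def local_update(weights, x, y, lr=0.1):
    """One gradient-descent step on a client's private (x, y) data."""
    pred = x.dot(weights)
    grad = x.T.dot(pred - y) / len(x)
    return weights - lr * grad

# Three "clients", each holding private data the server never sees
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    x = rng.normal(size=(20, 2))
    y = x.dot(true_w)
    clients.append((x, y))

weights = np.zeros(2)
for _ in range(50):
    # Each client trains locally, starting from the shared model...
    local_models = [local_update(weights.copy(), x, y) for x, y in clients]
    # ...and the server only ever sees (and averages) the updated weights
    weights = np.mean(local_models, axis=0)

print(weights)  # converges toward true_w = [2.0, -1.0]
```

The key observation is that the raw `(x, y)` pairs never leave each client; the server works only with model weights. (As the later sections on hacking federated learning show, weights alone can still leak information, which is what motivates secure aggregation and homomorphic encryption.)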

Learning to detect spam

Let’s make it federated

Hacking into federated learning

Secure aggregation

Homomorphic encryption

Homomorphically encrypted federated learning

Summary
