Chapter 10. Adversarial examples


This chapter covers

  • Adversarial examples, a fascinating research area that precedes GANs and shares an interwoven history with them
  • How deep learning classifiers can be fooled in a computer vision setting
  • Crafting our own adversarial examples from real images and from noise

Over the course of this book, you have come to understand GANs as an intuitive concept. In 2014, however, GANs seemed like a massive leap of faith, especially to those unfamiliar with the emerging field of adversarial examples, to which Ian Goodfellow and others had already contributed.[1] This chapter dives into adversarial examples: specially constructed inputs that make classification algorithms fail catastrophically.

1 See “Intriguing Properties of Neural Networks,” by Christian Szegedy et al., 2014, https://arxiv.org/pdf/1312.6199.pdf.
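To make the idea concrete, here is a minimal sketch of one common construction, the fast gradient sign method (FGSM): every pixel is nudged by a small step epsilon in the direction that increases the classifier's loss. The Keras-style model, the epsilon value, and the [0, 1] pixel range are assumptions made for illustration, not the exact code used later in this chapter.

```python
import tensorflow as tf

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method.

    Assumes `model` is a Keras classifier that outputs class probabilities,
    `image` is a batched float tensor scaled to [0, 1], and `label` is the
    integer class index. Epsilon bounds how far each pixel may move.
    """
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)                  # record operations on the input so we can differentiate w.r.t. its pixels
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)  # how the loss changes as each pixel changes
    adversarial = image + epsilon * tf.sign(gradient)   # step every pixel slightly "uphill" on the loss
    return tf.clip_by_value(adversarial, 0.0, 1.0)      # keep the result a valid image
```

Even with an epsilon too small for a human to notice, the perturbed image is often misclassified with high confidence, which is exactly the failure mode this chapter explores.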

We also discuss their connections to GANs, and how and why adversarial learning remains largely an unsolved problem in machine learning, an important but rarely discussed flaw of current approaches. That is true even though adversarial examples have an important role to play in ML robustness, fairness, and (cyber)security.

10.1. Context of adversarial examples

10.2. Lies, damned lies, and distributions

10.3. Use and abuse of training

10.4. Signal and the noise

10.5. Not all hope is lost

10.6. Adversaries to GANs

10.7. Conclusion

Summary