6 Neuroevolution optimization


This chapter covers

  • How DL networks optimize or learn
  • Replacing backpropagation training of neural networks with GAs
  • Evolutionary optimization of neural networks
  • Applying evolutionary optimization to a Keras DL model
  • Scaling up neuroevolution to tackle image classification tasks

In the last chapter, we got our feet wet by employing evolutionary algorithms (EAs) to optimize DL network hyperparameters. We saw how an EA could improve the search for hyperparameters beyond simple random or grid search. Employing variations of EA, such as particle swarm optimization (PSO), evolution strategies, and differential evolution, uncovered further insights into hyperparameter optimization (HPO).

Evolutionary DL is a term we use to encompass all evolutionary methods employed to improve DL. More specifically, the term neuroevolution refers to specific optimization patterns applied to DL. One such pattern, which we looked at in the last chapter, is the application of evolutionary algorithms to HPO.

Neuroevolution encompasses techniques for HPO, parameter optimization (weight/parameter search), and network optimization. In this chapter, we dive into how evolutionary methods can be applied to optimize network parameters directly, thus eliminating the need to backpropagate errors or loss through a network.
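To make the idea concrete before we dive in, here is a minimal sketch of that pattern: a genetic algorithm evolving the weights of a tiny NumPy MLP to solve XOR, with no gradients anywhere. This is not the chapter's code; the network shape, population size, and mutation settings are illustrative assumptions.

```python
import numpy as np

# A minimal neuroevolution sketch: a GA searches the weights of a tiny MLP
# directly, so no backpropagation is needed. All settings are illustrative.
rng = np.random.default_rng(0)

# XOR task: 4 samples, 2 inputs, 1 output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network shape 2 -> 4 -> 1; an individual is one flat weight vector
SIZES = [(2, 4), (4,), (4, 1), (1,)]            # W1, b1, W2, b2
N_PARAMS = sum(int(np.prod(s)) for s in SIZES)  # 17 parameters total

def unpack(flat):
    """Slice a flat genome back into weight matrices and bias vectors."""
    params, i = [], 0
    for s in SIZES:
        n = int(np.prod(s))
        params.append(flat[i:i + n].reshape(s))
        i += n
    return params

def forward(flat, X):
    W1, b1, W2, b2 = unpack(flat)
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def fitness(flat):
    """Negative mean squared error: higher is better."""
    return -np.mean((forward(flat, X) - y) ** 2)

POP, GENS, N_ELITE, MUT_STD = 50, 200, 10, 0.1
pop = rng.normal(0, 1, size=(POP, N_PARAMS))

for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]           # best individuals first
    elite = pop[order[:N_ELITE]]               # elitism: keep the top 20%
    children = []
    for _ in range(POP - N_ELITE):
        # Uniform crossover between two random elite parents,
        # followed by Gaussian mutation
        a = elite[rng.integers(N_ELITE)]
        b = elite[rng.integers(N_ELITE)]
        mask = rng.random(N_PARAMS) < 0.5
        child = np.where(mask, a, b) + rng.normal(0, MUT_STD, N_PARAMS)
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("predictions:", forward(best, X).ravel().round(2))
```

Notice that the fitness function only needs the network's forward pass: because no gradients are computed, neither the loss nor the activations need to be differentiable, which is exactly the freedom neuroevolution buys us.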

6.1 Multilayered perceptron in NumPy

6.1.1 Learning exercises

6.2 Genetic algorithms as deep learning optimizers

6.2.1 Learning exercises

6.3 Other evolutionary methods for neurooptimization

6.3.1 Learning exercises

6.4 Applying neuroevolution optimization to Keras

6.4.1 Learning exercises

6.5 Understanding the limits of evolutionary optimization

6.5.1 Learning exercises

Summary