10 NEAT: NeuroEvolution of Augmenting Topologies
This chapter covers
- Building networks with evolving, augmenting topologies
- Visualizing a NEAT network
- Exercising the capabilities of NEAT
- Using NEAT to classify images
- Uncovering the role of speciation in neuroevolution
Over the course of the last couple of chapters, we explored the evolutionary optimization of generative adversarial and autoencoder networks. As in earlier chapters, those exercises layered or wrapped evolutionary optimization around deep learning networks. In this chapter, we break from DEAP and Keras to explore a neuroevolutionary framework called NEAT.
NeuroEvolution of Augmenting Topologies (NEAT) was developed by Ken Stanley in 2002 while he was at the University of Texas at Austin. At the time, genetic algorithms (evolutionary computation) and deep learning (advanced neural networks) were on equal footing, and both were considered the next big thing in AI. Stanley's NEAT framework captured the attention of many because it combined neural networks with evolution to optimize not just hyperparameters, weight parameters, and architecture, but the neural connections themselves.
Figure 10.1 shows a comparison between a regular DL network and an evolved NEAT network. In the figure, connections have been added or removed in the evolved NEAT network, and nodes have been removed or repositioned. Notice how this differs from our previous efforts, which simply altered the number of nodes in a fully connected DL layer.
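To make this idea concrete, here is a minimal sketch of how a NEAT-style genome can encode topology directly. This is a simplified, hypothetical data structure, not the API of the NEAT framework we use later in the chapter: the ConnectionGene class and mutate_add_node function are illustrative names only. The key point is that each connection is itself a gene carrying an innovation number, so evolution can add nodes and connections rather than only retune weights.

```python
import random

# Hypothetical sketch of a NEAT-style genome (not a real library's API).
# A genome is a list of connection genes, so topology itself is evolvable.

class ConnectionGene:
    def __init__(self, in_node, out_node, weight, innovation, enabled=True):
        self.in_node = in_node        # source node id
        self.out_node = out_node      # target node id
        self.weight = weight          # evolved like any numeric gene
        self.innovation = innovation  # historical marker used during crossover
        self.enabled = enabled        # disabled genes remain in the genome

def mutate_add_node(genome, new_node_id, next_innovation):
    """NEAT-style add-node mutation: split a connection A->B into A->C->B."""
    conn = random.choice([c for c in genome if c.enabled])
    conn.enabled = False  # the old direct link is disabled, not deleted
    # A->C gets weight 1.0 so the new node initially passes the signal through,
    # and C->B inherits the old weight, keeping behavior roughly unchanged.
    genome.append(ConnectionGene(conn.in_node, new_node_id, 1.0,
                                 next_innovation))
    genome.append(ConnectionGene(new_node_id, conn.out_node, conn.weight,
                                 next_innovation + 1))

# Start minimal: one input (node 0) wired directly to one output (node 1).
genome = [ConnectionGene(0, 1, weight=0.5, innovation=0)]
mutate_add_node(genome, new_node_id=2, next_innovation=1)
for c in genome:
    print(c.in_node, "->", c.out_node, "weight:", c.weight,
          "enabled:", c.enabled)
```

Running the sketch shows the original 0->1 connection disabled and replaced by the path 0->2->1, which is exactly the kind of structural change depicted in figure 10.1.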