This chapter covers
- Building networks with evolving, augmenting topologies
- Visualizing a NeuroEvolution of Augmenting Topologies network
- Exercising the capabilities of NeuroEvolution of Augmenting Topologies
- Applying NeuroEvolution of Augmenting Topologies to image classification
- Uncovering the role of speciation in neuroevolution
Over the last couple of chapters, we explored the evolutionary optimization of generative adversarial networks and autoencoders. As in earlier chapters, those exercises layered or wrapped evolutionary optimization around DL networks. In this chapter, we break from Distributed Evolutionary Algorithms in Python (DEAP) and Keras to explore a neuroevolutionary framework called NeuroEvolution of Augmenting Topologies (NEAT).
NEAT was developed by Ken Stanley in 2002 while he was at the University of Texas at Austin. At the time, GAs (evolutionary computation) and DL (advanced neural networks) were considered equally promising candidates for the next big thing in AI. Stanley's NEAT framework captured the attention of many because it combined neural networks with evolution to optimize not just hyperparameters, weight parameters, and architecture but also the actual neural connections themselves.
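To make that idea concrete, the following is a minimal sketch of evolving a network with NEAT-Python, a popular open source implementation of Stanley's algorithm. The configuration file name (neat-config), the XOR task, and the fitness function are illustrative assumptions for this sketch, not fixed parts of NEAT:

```python
# Minimal NEAT-Python sketch: evolve a network to solve XOR.
# Assumes a NEAT configuration file named "neat-config" exists in the
# working directory; the file name and task are illustrative choices.
import neat

# Truth table for XOR, a classic minimal test problem for NEAT
XOR_INPUTS = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
XOR_OUTPUTS = [0.0, 1.0, 1.0, 0.0]

def eval_genomes(genomes, config):
    """Assign a fitness score to every genome in the population."""
    for genome_id, genome in genomes:
        # Build a phenotype (a runnable network) from the genome
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # Start from a perfect score and subtract squared error
        genome.fitness = 4.0
        for xi, xo in zip(XOR_INPUTS, XOR_OUTPUTS):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo) ** 2

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat-config")
population = neat.Population(config)
# Evolve for up to 100 generations; NEAT mutates connection weights
# *and* adds nodes and connections as it searches for a solution
winner = population.run(eval_genomes, 100)
```

Notice there is no network architecture defined anywhere in the code: genomes start minimal, and `population.run` grows their topologies generation by generation, which is the core idea we unpack throughout this chapter.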