10 NEAT: NeuroEvolution of Augmenting Topologies

 

This chapter covers

  • Building networks with evolving, augmenting topologies
  • Visualizing a NeuroEvolution of Augmenting Topologies network
  • Exercising the capabilities of NeuroEvolution of Augmenting Topologies
  • Exercising NeuroEvolution of Augmenting Topologies to classify images
  • Uncovering the role of speciation in neuroevolution

Over the last couple of chapters, we explored the evolutionary optimization of generative adversarial networks and autoencoders. As in earlier chapters, those exercises layered or wrapped evolutionary optimization around DL networks. In this chapter, we break from distributed evolutionary algorithms in Python (DEAP) and Keras to explore a neuroevolutionary framework called NeuroEvolution of Augmenting Topologies (NEAT).

NEAT was developed by Ken Stanley in 2002 while he was at the University of Texas at Austin. At the time, evolutionary computation (GAs) and advanced neural networks (DL) were considered equals, and both were touted as the next big thing in AI. Stanley’s NEAT framework captured wide attention because it combined neural networks with evolution to optimize not just hyperparameters, weight parameters, and architecture but the neural connections themselves.

10.1 Exploring NEAT with NEAT-Python
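As the name suggests, NEAT grows, or augments, a network's topology during evolution rather than fixing it up front. Before turning to the NEAT-Python library, the core mechanism is worth sketching in plain Python. The names below are illustrative, not part of NEAT-Python's API; the sketch assumes a genome is simply a list of connection genes, each stamped with a global innovation number (NEAT's historical marker), and shows the add-node mutation, which splits an existing connection while initially preserving the network's behavior:

```python
INNOVATION = 0  # global historical-marker counter shared by all genomes

def next_innovation():
    global INNOVATION
    INNOVATION += 1
    return INNOVATION

class ConnectionGene:
    """One weighted, directed connection between two node ids."""
    def __init__(self, in_node, out_node, weight, enabled=True):
        self.in_node, self.out_node = in_node, out_node
        self.weight, self.enabled = weight, enabled
        self.innovation = next_innovation()

def add_node_mutation(genome, conn, new_node_id):
    """NEAT's add-node structural mutation: split `conn` with a new node.

    The old connection is disabled (not deleted, so history is kept); the
    incoming replacement gets weight 1.0 and the outgoing one inherits the
    old weight, so the network's output is initially unchanged.
    """
    conn.enabled = False
    genome.append(ConnectionGene(conn.in_node, new_node_id, 1.0))
    genome.append(ConnectionGene(new_node_id, conn.out_node, conn.weight))

# Start from a minimal genome: one connection from input 0 to output 1,
# then augment it with hidden node 2.
genome = [ConnectionGene(0, 1, weight=0.7)]
add_node_mutation(genome, genome[0], new_node_id=2)
```

Because every new gene receives a fresh innovation number, two genomes evolved independently can later be aligned gene-by-gene during crossover, which is what makes NEAT's topology-aware recombination possible.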

 
 
 

10.1.1 Learning exercises

 
 

10.2 Visualizing an evolved NEAT network
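Because NEAT genomes change shape as they evolve, a drawing of the network is often more informative than a weight dump. One lightweight way to inspect a genome, sketched here with an illustrative helper (not a NEAT-Python function), is to emit Graphviz DOT text, which can then be rendered with the `dot` command-line tool or the `graphviz` Python package:

```python
def genome_to_dot(nodes, connections):
    """Emit Graphviz DOT text for a genome.

    `nodes` maps node id -> kind ('input', 'hidden', or 'output');
    `connections` is a list of (in_node, out_node, weight, enabled) tuples.
    Disabled connections are drawn dashed, so the topology's history
    stays visible in the picture.
    """
    lines = ['digraph genome {', '  rankdir=LR;']
    for node_id, kind in nodes.items():
        shape = {'input': 'box', 'output': 'doublecircle'}.get(kind, 'circle')
        lines.append(f'  n{node_id} [label="{node_id}", shape={shape}];')
    for src, dst, weight, enabled in connections:
        style = 'solid' if enabled else 'dashed'
        lines.append(f'  n{src} -> n{dst} [label="{weight:+.2f}", style={style}];')
    lines.append('}')
    return '\n'.join(lines)

# A genome after one add-node mutation: the direct 0 -> 1 link is disabled
# and replaced by the path 0 -> 2 -> 1.
dot = genome_to_dot({0: 'input', 1: 'output', 2: 'hidden'},
                    [(0, 1, 0.70, False), (0, 2, 1.00, True), (2, 1, 0.70, True)])
print(dot)
```

Emitting plain DOT text keeps the sketch dependency-free while still producing input any Graphviz renderer accepts.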

 
 

10.3 Exercising the capabilities of NEAT

 
 
 

10.3.1 Learning exercises

 

10.4 Exercising NEAT to classify images
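For image classification, the evolutionary loop needs a fitness function that scores how well a candidate network labels a set of images. A minimal, library-free sketch (the helper name, toy data, and stand-in network are all illustrative, not taken from NEAT-Python) treats fitness as classification accuracy over flattened pixel vectors:

```python
def classification_fitness(net, samples):
    """Fitness = fraction of samples the network labels correctly.

    `net` is any callable mapping a flattened pixel list to a list of
    per-class scores; the predicted class is the argmax of the outputs.
    """
    correct = 0
    for pixels, label in samples:
        outputs = net(pixels)
        predicted = max(range(len(outputs)), key=outputs.__getitem__)
        if predicted == label:
            correct += 1
    return correct / len(samples)

# Toy 2x2 "images": class 0 has a bright left column, class 1 a bright right column.
samples = [([1, 0, 1, 0], 0), ([0, 1, 0, 1], 1)]

# A hand-written stand-in for an evolved network: sums each column.
net = lambda px: [px[0] + px[2], px[1] + px[3]]
print(classification_fitness(net, samples))  # → 1.0
```

In a real run the callable would be the phenotype built from an evolved genome, and accuracy (or a smoother error measure) would be computed over a sampled batch of real images to keep evaluation fast.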

 
 

10.4.1 Learning exercises

 
 
 
 

10.5 Uncovering the role of speciation in evolving topologies
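Speciation protects structural innovation by grouping similar genomes into species so that a new topology competes first within its own niche. Similarity is measured by NEAT's compatibility distance, delta = c1*E/N + c2*D/N + c3*W_bar, where E counts excess genes, D disjoint genes, W_bar is the mean weight difference of matching genes, and N is the size of the larger genome. The sketch below (an illustrative helper, with genomes reduced to innovation-number-to-weight dicts) computes that distance:

```python
def compatibility(genome1, genome2, c1=1.0, c2=1.0, c3=0.4):
    """NEAT compatibility distance between two genomes.

    Genomes are dicts mapping innovation number -> connection weight.
    Unmatched genes beyond the other genome's highest innovation number
    are excess (E); unmatched genes within range are disjoint (D).
    """
    innovations1, innovations2 = set(genome1), set(genome2)
    matching = innovations1 & innovations2
    cutoff = min(max(innovations1), max(innovations2))
    unmatched = innovations1 ^ innovations2
    excess = sum(1 for i in unmatched if i > cutoff)
    disjoint = len(unmatched) - excess
    n = max(len(genome1), len(genome2))
    w_bar = (sum(abs(genome1[i] - genome2[i]) for i in matching) / len(matching)
             if matching else 0.0)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar

g1 = {1: 0.5, 2: -0.3, 4: 0.8}
g2 = {1: 0.1, 3: 0.2, 4: 0.8, 5: 0.4}
print(compatibility(g1, g2))  # E=1, D=2, N=4, W_bar=0.2 -> 0.83
```

A genome joins the first species whose representative lies within a compatibility threshold of it; raising c3 makes weight differences matter more, while c1 and c2 emphasize topological differences.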

 
 
 

10.5.1 Tuning NEAT speciation
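In NEAT-Python, speciation is tuned through the configuration file rather than code. The fragment below shows the relevant sections; the parameter names follow NEAT-Python's configuration format, while the values are illustrative starting points, not recommendations from this book:

```
[DefaultGenome]
# c2 and c3 in the compatibility-distance formula
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient   = 0.5

[DefaultSpeciesSet]
# genomes closer than this distance share a species;
# lower it for more, smaller species
compatibility_threshold = 3.0

[DefaultStagnation]
# remove species whose best fitness has not improved recently
species_fitness_func = max
max_stagnation       = 20
species_elitism      = 2
```

Lowering `compatibility_threshold` fragments the population into more species, preserving more structural diversity at the cost of smaller within-species populations; `max_stagnation` controls how long an unimproving species survives before it is culled.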

 
 

10.5.2 Learning exercises

 
 
 

Summary

 
 
 