In the last chapter, we covered how convolutional neural network (CNN) architectures can be adapted using evolutionary algorithms. We used genetic algorithms to encode a gene sequence that defines a CNN model for image classification, successfully building better-optimized networks for image recognition tasks.
In this chapter, we continue to build on those fundamentals and explore evolving autoencoders (AEs). We apply our experience from evolving CNN architectures in the last chapter to convolutional AEs. Then, we move on to more advanced variational AEs and explore novel ways of evolving the model's loss.
AEs are a foundation of deep learning (DL) that introduces unsupervised and representation learning. Chances are that if you have spent any time studying DL, you have encountered AEs and variational AEs. From the perspective of evolutionary deep learning (EDL), they introduce some novel applications that we explore in this chapter.
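Before we start evolving them, it helps to recall what an AE does at its core: compress an input into a smaller latent representation and then reconstruct the input from that code. The following is a minimal sketch in plain NumPy (an illustrative example, not the code we develop in this chapter): a linear AE that encodes 8-dimensional inputs into a 3-dimensional latent space and trains with gradient descent on the mean squared reconstruction error. All names and dimensions here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 200, 8, 3                          # samples, input dim, latent dim
X = rng.normal(size=(n, d))                  # toy input data

W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder weights
lr = 0.05                                    # learning rate

def reconstruction_loss(X, W_enc, W_dec):
    Z = X @ W_enc                            # encode: latent representation
    X_hat = Z @ W_dec                        # decode: reconstruction
    return np.mean((X - X_hat) ** 2)

initial_loss = reconstruction_loss(X, W_enc, W_dec)

for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                          # reconstruction error, shape (n, d)
    grad_dec = Z.T @ err / n                 # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / n     # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = reconstruction_loss(X, W_enc, W_dec)
print(final_loss < initial_loss)             # training reduces reconstruction error
```

A convolutional AE replaces these dense weight matrices with convolution and transposed-convolution layers, but the encode-compress-decode structure and the reconstruction loss stay the same, which is exactly the loss we will later evolve.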