Autoencoders are powerful tools for learning functions that transform input into output without being given an explicit set of rules for doing so. Autoencoders get their name from their function: learning a representation of the input that is much smaller than the input itself. They encode the input data into a compact internal representation and then decode that representation to recover an approximation of the original input. When the input is an image, autoencoders have many useful applications. Compression is one: in chapter 11, you used a hidden layer of 100 neurons and formatted your 2D image input in row-order format. By averaging the red, green, and blue channels, the autoencoder learns a representation that encodes a 32 × 32 × 3 (height × width × channel) image, or 3,072 pixel intensities, into just 100 numbers, a reduction of roughly 30x in data. How’s that for compression? Although you trained a network to demonstrate this use case in chapter 11, you did not explore the resulting learned representation of the images; you will in this chapter.
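To make the pipeline concrete, here is a minimal sketch of that compression idea, assuming PyTorch and a single-hidden-layer fully connected autoencoder. The layer sizes, activations, and helper names are illustrative, not the exact chapter 11 network.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_pixels=32 * 32, n_code=100):
        super().__init__()
        # Encoder: 1,024 grayscale intensities -> a 100-number code
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_code), nn.ReLU())
        # Decoder: the 100-number code -> an approximate reconstruction
        self.decoder = nn.Sequential(nn.Linear(n_code, n_pixels), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A 32 x 32 x 3 image: average the red, green, and blue channels,
# then flatten the 2D result in row-order (row-major) format.
image = torch.rand(32, 32, 3)      # stand-in for a real image in [0, 1]
gray = image.mean(dim=-1)          # 32 x 32 grayscale
x = gray.flatten().unsqueeze(0)    # shape (1, 1024), row-order

model = Autoencoder()
code = model.encoder(x)            # 100 numbers: the learned representation
recon = model.decoder(code)        # approximate reconstruction of the input
print(code.shape, recon.shape)     # torch.Size([1, 100]) torch.Size([1, 1024])
```

Counting from the original 3,072 color intensities down to the 100-number code gives the roughly 30x reduction described above; after training with a reconstruction loss such as mean squared error, the code is the compressed representation you will explore in this chapter.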