Chapter 15. Self-organizing maps and locally linear embedding


This chapter covers

  • Creating self-organizing maps to reduce dimensionality
  • Creating locally linear embeddings of high-dimensional data

In this chapter, we’re continuing with dimension reduction: the class of machine learning tasks focused on representing the information contained in a large number of variables using a smaller number of variables. As you learned in chapters 13 and 14, there are multiple ways to reduce the dimensions of a dataset, and which dimension-reduction algorithm works best for you depends on the structure of your data and what you’re trying to achieve. Therefore, I’m going to add two more nonlinear dimension-reduction algorithms to your ever-growing machine learning toolbox: self-organizing maps (SOMs) and locally linear embedding (LLE).

15.1. Prerequisites: Grids of nodes and manifolds

Both the SOM and LLE algorithms reduce a large dataset to a smaller, more manageable number of variables, but they work in very different ways. The SOM algorithm creates a two-dimensional grid of nodes, like grid references on a map. Each case in the data is assigned to a node, and the assignments are updated iteratively so that cases that are more similar to each other in the original data end up close together on the map. The LLE algorithm, in contrast, assumes the data lie on a manifold: a lower-dimensional surface curled up inside the high-dimensional space. It reconstructs each case as a weighted combination of its nearest neighbors and then finds a low-dimensional embedding that preserves those reconstruction weights, effectively unrolling the manifold flat.
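
To make the distinction concrete, here is a minimal sketch of both ideas in Python, assuming NumPy and scikit-learn are available. The toy SOM is written from scratch purely to expose the mechanics, and the iris data, the 5 × 5 grid, the single training pass, and the parameter values are all arbitrary illustrative choices, not the setup used in this chapter’s worked examples; scikit-learn’s LocallyLinearEmbedding stands in for whichever LLE implementation you prefer.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.preprocessing import scale

X = scale(load_iris().data)                            # 150 cases, 4 variables
rng = np.random.default_rng(42)

# Self-organizing map: a 5 x 5 grid of nodes, each holding a weight
# vector that lives in the original 4-dimensional space
grid = np.array([(i, j) for i in range(5) for j in range(5)])
weights = rng.normal(size=(25, X.shape[1]))

for t, x in enumerate(X[rng.permutation(len(X))]):     # one pass over the data
    lr = 0.5 * np.exp(-t / len(X))                     # decaying learning rate
    sigma = 2.0 * np.exp(-t / len(X))                  # shrinking neighborhood
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)         # grid distance to the BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood weighting
    weights += lr * h[:, None] * (x - weights)         # pull nearby nodes toward x

bmus = np.argmin(((X[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)
som_coords = grid[bmus]                                # each case's (row, column) node

# Locally linear embedding: preserve each case's reconstruction weights
# from its 10 nearest neighbors in a 2-dimensional embedding
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
lle_coords = lle.fit_transform(X)                      # one 2D point per case

Cases that are similar in the original four variables should now share or neighbor a node in som_coords and sit close together in lle_coords.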

15.2. What are self-organizing maps?


15.3. Building your first SOM


15.4. What is locally linear embedding?


15.5. Building your first LLE


15.6. Building an LLE of our flea data


15.7. Strengths and weaknesses of SOMs and LLE


Summary


Solutions to exercises
