17 Clustering by finding hierarchies: hierarchical clustering

 

This chapter covers:

  • What hierarchical clustering is and why we would use it
  • What linkage methods are and how they affect the hierarchy
  • How to measure the stability of a clustering result

In the previous chapter, we saw how k-means clustering finds k centroids in the feature space and iteratively updates them to find a set of clusters. Hierarchical clustering takes a different approach and, as its name suggests, learns a hierarchy of clusters in a dataset. Instead of a "flat" set of clusters, hierarchical clustering gives us a tree of clusters within clusters. As a result, it gives us more insight into complex grouping structures than flat clustering methods like k-means.

The tree of clusters is built iteratively. At each step, we calculate the distance between every case or cluster and every other case or cluster in the dataset. Depending on the algorithm, either the pair of cases/clusters that are most similar to each other is merged into a single cluster, or the sets of cases/clusters that are most dissimilar from each other are split into separate clusters. I'll introduce both approaches later in the chapter.
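
To make the bottom-up version of this process concrete, here is a minimal sketch in Python using SciPy's scipy.cluster.hierarchy module. The library, data, and parameter choices here are illustrative assumptions, not code from this chapter: linkage() performs the iterative merging, and fcluster() cuts the resulting tree into a flat set of clusters (a step previewed in section 17.2.2).

    # A minimal illustrative sketch (assumes SciPy; not this chapter's code).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(seed=1)

    # Toy data: two loose groups of points in a 2-D feature space.
    data = np.vstack([
        rng.normal(loc=0.0, scale=0.5, size=(10, 2)),
        rng.normal(loc=3.0, scale=0.5, size=(10, 2)),
    ])

    # Agglomerative clustering: start with each case as its own cluster,
    # then repeatedly merge the closest pair. "average" is one linkage
    # method; the chapter discusses how this choice shapes the hierarchy.
    merges = linkage(data, method="average")

    # Cut the tree to recover a flat set of clusters (here, 2).
    flat_clusters = fcluster(merges, t=2, criterion="maxclust")
    print(flat_clusters)

Each row of the matrix returned by linkage() records one merge: the indices of the two clusters joined, the distance at which they merged, and the size of the new cluster. This is exactly the information a dendrogram plots.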

17.1  What is hierarchical clustering?

17.1.1  Agglomerative hierarchical clustering

17.1.2  Divisive hierarchical clustering

17.2  Building our first agglomerative hierarchical clustering model

17.2.1  How do I choose the number of clusters?

17.2.2  Cutting the tree to select a flat set of clusters

17.3  How stable are my clusters?

17.4  Strengths and weaknesses of hierarchical clustering

17.5  Summary

17.6  Solutions to exercises
