Chapter 6. Classifying with naive Bayes and support vector machines

This chapter covers

  • Working with the naive Bayes algorithm
  • Understanding the support vector machine algorithm
  • Tuning many hyperparameters simultaneously with a random search

The naive Bayes and support vector machine (SVM) algorithms are supervised learning algorithms for classification. Each algorithm learns in a different way. The naive Bayes algorithm uses Bayes’ rule, which you learned about in chapter 5, to estimate the probability that a new case belongs to each of the classes in the dataset; the case is then assigned to the class with the highest probability. The SVM algorithm looks for a hyperplane (a surface that has one less dimension than there are predictor variables) that separates the classes. The position and orientation of this hyperplane depend on the support vectors: the cases that lie closest to the boundary between the classes.
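To make the naive Bayes decision rule concrete, here is a minimal from-scratch sketch in Python. The toy dataset, class labels, and function names are all hypothetical, invented for illustration; this is not any particular library's API. The sketch estimates a prior and per-predictor Gaussian parameters for each class, then assigns a new case to the class with the highest posterior under the "naive" assumption that predictors are independent within each class.

```python
import math

# Hypothetical toy data: (weight in kg, body length in cm) per class.
train = {
    "cat": [(4.0, 30.0), (5.0, 32.0), (4.5, 28.0)],
    "dog": [(20.0, 60.0), (25.0, 65.0), (22.0, 58.0)],
}

def log_gaussian(x, mean, var):
    # Log of the normal density, computed directly to avoid underflow.
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def fit(data):
    # Estimate a class prior plus a per-predictor mean and variance.
    params = {}
    n_total = sum(len(rows) for rows in data.values())
    for label, rows in data.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        vars_ = [sum((x - m) ** 2 for x in c) / len(c)
                 for c, m in zip(cols, means)]
        params[label] = (len(rows) / n_total, means, vars_)
    return params

def predict(params, case):
    # Bayes' rule with the independence assumption:
    # log posterior is proportional to log prior + summed log likelihoods.
    return max(
        params,
        key=lambda label: math.log(params[label][0]) + sum(
            log_gaussian(x, m, v)
            for x, m, v in zip(case, params[label][1], params[label][2])
        ),
    )

model = fit(train)
print(predict(model, (4.8, 31.0)))  # a small, short case: classified as "cat"
```

Working in log space is the standard trick here: multiplying many small likelihoods underflows to zero, whereas summing their logs stays numerically stable.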

Note

While commonly used for classification, the SVM algorithm can also be used for regression problems. I won’t discuss how here, but if you’re interested (and want to explore SVMs in more depth generally), see Support Vector Machines by Ingo Steinwart and Andreas Christmann (Springer, 2008).

6.1. What is the naive Bayes algorithm?

6.2. Building your first naive Bayes model

6.3. Strengths and weaknesses of naive Bayes

6.4. What is the support vector machine (SVM) algorithm?

6.5. Building your first SVM model

6.6. Cross-validating our SVM model

6.7. Strengths and weaknesses of the SVM algorithm

Summary

Solutions to exercises
