concept linear classifier in category machine learning

appears as: linear classifier, linear classifiers

This is an excerpt from Manning's book Grokking Machine Learning MEAP V09.

At first glance, it looks like we won’t be able to fit a linear classifier to this data. If you try to draw a line that splits the happy and the sad faces apart, you won’t be able to. What can we do, then? We’ve learned other classifiers that can do the job, such as naive Bayes (chapter 6) or decision trees (chapter 7), but in this chapter we will stick with perceptrons and logistic regression. The reason is that we can combine these simple classifiers into complex architectures, which gives rise to much stronger classifiers. If our goal is to separate the points in figure 8.2, clearly one line won’t do it. What is better than one line? I can think of two things:

Figure 8.3. The happy and the sad points in our dataset cannot be divided by one line. However, drawing two lines separates them well. Combining linear classifiers this way is the basis for neural networks.
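The idea in figure 8.3 can be sketched in a few lines of code. Below is a minimal illustration (the dataset, the two lines, and their coefficients are all made up for this example, not taken from the book): each of the two lines is a linear classifier on its own, and combining them with a logical AND carves out the strip between them, which a single line could never do.

```python
import numpy as np

# Hypothetical toy dataset: points whose x-coordinate lies between -1 and 1
# are "happy" (label 1), the rest are "sad" (label 0).
# No single straight line separates these two groups.
X = np.array([[-2.0, 0.0], [-0.5, 0.5], [0.5, -0.5], [2.0, 0.0]])
y = np.array([0, 1, 1, 0])

def line1(p):
    # First linear classifier: fires for points to the right of the line x = -1
    return p[0] > -1

def line2(p):
    # Second linear classifier: fires for points to the left of the line x = 1
    return p[0] < 1

def combined(p):
    # A point is classified "happy" only if BOTH linear classifiers agree:
    # this selects the strip between the two lines.
    return int(line1(p) and line2(p))

preds = [combined(p) for p in X]
print(preds)  # [0, 1, 1, 0] — matches the labels, which one line alone cannot do
```

Neural networks take this idea further: instead of a hard AND, they combine the outputs of several linear classifiers with learned weights and a nonlinearity.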

In chapters 4 and 5, we learned about linear classifiers. In two dimensions, a linear classifier is simply defined by a line that separates a dataset of points with two labels. However, you may have noticed that many different lines can separate the same dataset, which raises the question: how do we know which is the best line? In figure 9.1, I show you three different classifiers that separate this dataset. Which one do you prefer: classifier 1, 2, or 3?
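To see concretely that many different lines can separate the same dataset, here is a small sketch (the dataset and the three sets of line coefficients are invented for illustration, not the ones in figure 9.1). A 2D linear classifier predicts 1 when w·p + b > 0, i.e. when the point lies on the positive side of the line w·p + b = 0; all three classifiers below give identical, correct predictions on this dataset even though their lines differ.

```python
import numpy as np

# Hypothetical linearly separable dataset: label 1 for points above the
# diagonal, label 0 for points below it.
X = np.array([[0.0, 2.0], [1.0, 3.0], [2.0, 0.0], [3.0, 1.0]])
y = np.array([1, 1, 0, 0])

def classifier(w, b):
    # A linear classifier in 2D: predict 1 when w·p + b > 0,
    # i.e. the point lies on the positive side of the line w·p + b = 0.
    return lambda p: int(np.dot(w, p) + b > 0)

# Three different lines, all of which separate this dataset perfectly.
c1 = classifier(np.array([-1.0, 1.0]), 0.0)   # the line y = x
c2 = classifier(np.array([-1.0, 1.0]), 0.5)   # the line y = x - 0.5
c3 = classifier(np.array([-2.0, 1.0]), 1.5)   # the line y = 2x - 1.5

for c in (c1, c2, c3):
    print([c(p) for p in X])  # each prints [1, 1, 0, 0]
```

The training data alone cannot tell these three apart; choosing among them requires an extra criterion, such as how far the line stays from the nearest points.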
