
This is an excerpt from Manning's book Grokking Machine Learning MEAP V09.

In most machine learning books, each algorithm is explained in a very formulaic way, normally with an error function, another formula for the derivative of the error function, and a process that will help us minimize this error function in order to get to the solution. These descriptions of the methods work well in practice, but explaining them with formulas is the equivalent of teaching someone how to drive by opening the hood and frantically pointing at different parts of the car while reading their descriptions out of a manual. This doesn’t show what really happens, which is that the car moves forward when we press the gas pedal and stops when we hit the brakes. In this book, we study the algorithms in a different way. We do not use error functions and derivatives. Instead, we look at what is really happening with our data, and how we are modeling it.

Figure 5.5. On the horizontal axis we see the probability that a point is predicted to be its label (whether it’s happy or sad). On the vertical axis we see the error. Notice that well classified points lie towards the right, as the probability that they are their label is high, whereas poorly classified points lie towards the left. We would like an error function that looks like this graph, namely, one that assigns high values to the poorly classified points on the left, and low values to the correctly classified points on the right.
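The shape described in the caption can be sketched numerically. This is a minimal sketch assuming the error is the natural log loss, -ln(p), which has exactly this behavior: it blows up as the probability of the correct label approaches 0 and shrinks towards 0 as that probability approaches 1.

```python
import math

def error(prob_of_label):
    """Log loss: the error assigned to a point whose true label
    is predicted with probability prob_of_label (0 < p <= 1)."""
    return -math.log(prob_of_label)

# A well classified point (high probability of its label) gets a small error;
# a poorly classified point (low probability of its label) gets a large error.
print(error(0.95))  # small error, point is on the right of the graph
print(error(0.05))  # large error, point is on the left of the graph
```

Evaluating the function over a range of probabilities reproduces the curve in Figure 5.5: low on the right, climbing steeply towards infinity on the left.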

Now that we have created an error function that takes care of classification errors, we need to build one that takes care of the distance between the two lines. In this section we build a surprisingly simple error function which is large when the two lines are close, and small when they are far apart.

This error function is so simple that you have already seen it before; it is the regularization term. More specifically, if our lines have equations w1x1 + w2x2 + b = 1 and w1x1 + w2x2 + b = -1, then the error function is w1^2 + w2^2. Why so? We’ll make use of the following fact. The perpendicular distance between the two lines is precisely 2/sqrt(w1^2 + w2^2), as illustrated in Figure 9.5.
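The inverse relationship between this error and the distance can be checked directly. This is a minimal sketch assuming the two parallel lines are w1*x1 + w2*x2 + b = 1 and w1*x1 + w2*x2 + b = -1, so that the distance between them is 2/sqrt(w1^2 + w2^2):

```python
import math

def distance_error(w1, w2):
    # The regularization term, used here as the distance error function.
    return w1**2 + w2**2

def distance_between_lines(w1, w2):
    # Perpendicular distance between the parallel lines
    # w1*x1 + w2*x2 + b = 1 and w1*x1 + w2*x2 + b = -1.
    return 2 / math.sqrt(w1**2 + w2**2)

# Large weights mean the lines are close together: large error, small distance.
print(distance_error(10, 10), distance_between_lines(10, 10))
# Small weights mean the lines are far apart: small error, large distance.
print(distance_error(0.1, 0.1), distance_between_lines(0.1, 0.1))
```

Shrinking the weights by any factor shrinks the error and widens the gap between the lines by the same factor, which is exactly the behavior we asked for: large error when the lines are close, small error when they are far apart.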
