Part 2 Supervised learning


In the second part of the book, we will cover supervised learning algorithms. Supervised learning algorithms are trained on labeled examples and fall into two main classes, depending on whether the quantity we are trying to predict is discrete or continuous: classification and regression. Supervised learning algorithms are widely used in machine learning, and we’ll be deriving some of the most exciting ones from scratch, to build our experience with the fundamentals and to motivate the design of new ML algorithms.
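To make the classification/regression distinction concrete, here is a minimal sketch on hypothetical toy data, using a simple 1-nearest-neighbor predictor: the training inputs are the same for both tasks, and only the type of the target changes (discrete labels for classification, real values for regression).

```python
import numpy as np

# Hypothetical labeled training data: two features per example.
X_train = np.array([[1.0, 1.0], [2.0, 1.5], [8.0, 8.0], [9.0, 8.5]])
y_class = np.array([0, 0, 1, 1])          # discrete labels -> classification
y_value = np.array([1.1, 1.8, 8.2, 8.9])  # continuous targets -> regression

def nearest_neighbor(x, X):
    """Index of the training point closest to x (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(X - x, axis=1)))

x_new = np.array([1.5, 1.2])
i = nearest_neighbor(x_new, X_train)
print(y_class[i])  # classification: predict a discrete label -> 0
print(y_value[i])  # regression: predict a continuous value -> 1.1
```

The predictor itself is identical in both cases; whether the problem is classification or regression is determined entirely by the nature of the target variable.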

In chapter 5, we’ll focus on classification algorithms and derive several classic algorithms, including the perceptron, SVM, logistic regression, naive Bayes, and decision trees.

In chapter 6, we’ll study four intriguing regression algorithms: Bayesian linear regression, hierarchical Bayesian regression, KNN regression, and Gaussian process regression. We’ll derive each algorithm from first principles and implement it using best practices.

In chapter 7, we’ll investigate a selected set of supervised learning algorithms, including Markov models, such as the PageRank algorithm and hidden Markov models; imbalanced learning strategies; active learning; Bayesian optimization for hyperparameter selection; and ensemble methods. We’ll conclude with a look at ML research focused on supervised learning.