Part 1 Introducing ML algorithms

Welcome to Machine Learning Algorithms in Depth. In the first part of the book, we will discuss different types of ML algorithms, motivate their implementation from first principles, and introduce the two main camps of Bayesian inference: Markov chain Monte Carlo (MCMC) and variational inference.

In chapter 1, we’ll give reasons why we want to learn ML algorithms from scratch, introduce the subject of Bayesian inference and deep learning, and discuss algorithmic paradigms and data structures used in the software implementation of machine learning algorithms.

In chapter 2, we’ll introduce key Bayesian concepts and motivate MCMC via a series of examples, ranging from stock price estimation to Metropolis-Hastings sampling of multivariate Gaussian mixtures.
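To give a taste of the sampling methods chapter 2 develops, here is a minimal random-walk Metropolis-Hastings sketch in Python. The one-dimensional two-component Gaussian mixture target, the proposal scale, and all other parameter values are illustrative assumptions for this sketch, not the book's multivariate example:

```python
import numpy as np

def target_density(x):
    # Unnormalized two-component 1D Gaussian mixture with modes at -2 and +2
    # (a toy stand-in; MH only needs the density up to a constant)
    return 0.5 * np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * (x - 2) ** 2)

def metropolis_hastings(n_samples, proposal_scale=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        # Symmetric random-walk proposal, so the Hastings correction cancels
        x_new = x + proposal_scale * rng.standard_normal()
        accept_ratio = target_density(x_new) / target_density(x)
        if rng.random() < accept_ratio:  # accept with prob min(1, ratio)
            x = x_new
        samples.append(x)
    return np.array(samples)

samples = metropolis_hastings(5000)
```

Because the proposal is symmetric, this is the Metropolis special case; the general Metropolis-Hastings ratio additionally divides by the proposal densities.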

In chapter 3, we’ll focus on variational inference and, in particular, the mean-field approximation applied to image denoising in the Ising model. We’ll learn to approximate the full posterior distribution by minimizing the KL divergence, which is equivalent to maximizing the evidence lower bound (ELBO).
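The connection between the KL divergence and the evidence lower bound can be stated compactly with the standard variational-inference identity, where $q(z)$ is the approximating distribution over latent variables $z$ and $x$ is the observed data:

```latex
\log p(x) =
\underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\text{ELBO}}
\;+\;
\underbrace{\mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right)}_{\geq\, 0}
```

Since $\log p(x)$ does not depend on $q$, maximizing the ELBO over $q$ is the same as minimizing the KL divergence to the true posterior.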

In chapter 4, we’ll discuss linear, nonlinear, and probabilistic data structures as well as four algorithmic paradigms: complete search, greedy, divide and conquer, and dynamic programming. We’ll examine a few examples in each area and conclude with ML research in sampling methods and variational inference.
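As a flavor of one of these paradigms, here is a short bottom-up dynamic programming sketch in Python; the Fibonacci example is an illustrative assumption, not necessarily one the chapter itself works through:

```python
def fib_dp(n):
    # Bottom-up dynamic programming: each value is built from the two
    # previously computed subproblems, so no work is repeated
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

The naive recursive version recomputes the same subproblems exponentially many times; storing and reusing subproblem results, the hallmark of dynamic programming, brings the cost down to linear time.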