In this chapter, we dive directly into solving NLP problems. This is a two-part exercise, spanning this chapter and the next. Our goal is to establish a set of baselines for a pair of concrete NLP problems, which we can later use to measure the progressive improvements gained from increasingly sophisticated transfer learning approaches. In the process, we aim to sharpen your general NLP instincts and refresh your understanding of the typical procedures involved in setting up problem-solving pipelines for such problems. You will review techniques ranging from tokenization to data structuring and model selection. We first train some traditional machine learning models from scratch to establish preliminary baselines for these problems. We complete the exercise in chapter 3, where we apply the simplest form of transfer learning to a pair of recently popularized deep pretrained language models: fine-tuning only a handful of the final layers of each network on the target dataset. This activity serves as an applied, hands-on introduction to the main theme of the book: transfer learning for NLP.