This chapter covers
- Solving part-of-speech (POS) tagging and named entity recognition (NER) using sequential labeling
- Making RNNs more powerful—multilayer and bidirectional recurrent neural networks (RNNs)
- Capturing statistical properties of language using language models
- Using language models to evaluate and generate natural language text
In this chapter, we are going to discuss sequential labeling—an important NLP framework in which systems tag individual words with corresponding labels. Many NLP applications, such as part-of-speech tagging and named entity recognition, can be framed as sequential-labeling tasks. In the second half of the chapter, I’ll introduce the concept of language models, one of the most fundamental yet exciting topics in NLP. I’ll talk about why they are important and how to use them to evaluate and even generate natural language text.
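To make the idea concrete before we dive in, here is a minimal illustration of what sequential labeling looks like as data. The sentence and tags below are hand-written examples (not the output of any model discussed later): a labeler maps a sequence of tokens to an equally long sequence of labels, here POS tags and NER tags in the common BIO scheme.

```python
# Illustrative example: sequential labeling assigns exactly one label per token.
tokens = ["John", "lives", "in", "New", "York"]
pos_tags = ["NNP", "VBZ", "IN", "NNP", "NNP"]     # part-of-speech tags
ner_tags = ["B-PER", "O", "O", "B-LOC", "I-LOC"]  # named entities, BIO scheme

# A sequential labeler is a function from a token sequence to a label
# sequence of the same length—one tag per input token.
for token, pos, ner in zip(tokens, pos_tags, ner_tags):
    print(f"{token}\t{pos}\t{ner}")
```

Note that both tasks share the same shape of input and output, which is why a single framework, and a single family of models, can handle them both.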