5 Sequential labeling and language modeling

This chapter covers

  • Solving part-of-speech (POS) tagging and named entity recognition (NER) using sequential labeling
  • Making recurrent neural networks (RNNs) more powerful with multilayer and bidirectional architectures
  • Capturing statistical properties of language using language models
  • Using language models to evaluate and generate natural language text

In this chapter, we are going to discuss sequential labeling, an important NLP framework in which a system tags each word in a sequence with a corresponding label. Many NLP applications, such as part-of-speech tagging and named entity recognition, can be framed as sequential-labeling tasks. In the second half of the chapter, I’ll introduce the concept of language models, one of the most fundamental yet exciting topics in NLP. I’ll talk about why they are important and how to use them to evaluate and even generate natural language text.

5.1 Introducing sequential labeling

5.1.1 What is sequential labeling?
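
Sequential labeling takes a sequence of tokens as input and produces exactly one label per token as output, so the input and the output always have the same length. As a minimal illustration (a made-up sentence, tagged with Universal Dependencies-style POS labels, not drawn from any particular dataset):

tokens = ["time", "flies", "like", "an", "arrow"]
labels = ["NOUN", "VERB", "ADP", "DET", "NOUN"]  # one POS tag per token
assert len(tokens) == len(labels)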

5.1.2 Using RNNs to encode sequences
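
An RNN reads a sequence of input vectors and produces a hidden state at every position, which makes it a natural sequence encoder for labeling: each per-token hidden state can be passed to a classifier that predicts that token’s label. A minimal PyTorch sketch (the sizes are arbitrary, chosen only for illustration):

import torch
import torch.nn as nn

# An LSTM maps (batch, sequence, input_size) to (batch, sequence, hidden_size),
# producing one hidden state per token.
rnn = nn.LSTM(input_size=100, hidden_size=200, batch_first=True)
embeddings = torch.randn(8, 20, 100)  # 8 sentences, 20 tokens, 100-dim vectors
hidden_states, _ = rnn(embeddings)    # shape: (8, 20, 200)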

5.1.3 Implementing a Seq2Seq encoder in AllenNLP
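
AllenNLP abstracts this pattern as a Seq2SeqEncoder: a module that maps a sequence of vectors to an equal-length sequence of context-sensitive vectors. A sketch of wrapping a PyTorch LSTM as one (exact module paths vary slightly across AllenNLP versions):

import torch
from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper

# Wrap a PyTorch LSTM so it conforms to AllenNLP's Seq2SeqEncoder interface,
# which takes a (batch, sequence, dim) tensor plus a padding mask.
encoder = PytorchSeq2SeqWrapper(
    torch.nn.LSTM(input_size=100, hidden_size=200, batch_first=True))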

5.2 Building a part-of-speech tagger

5.2.1 Reading a dataset
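
A common data source for POS tagging is the Universal Dependencies treebanks, which annotate every token with a POS tag in CoNLL-U format. A sketch using AllenNLP’s Universal Dependencies reader (the import path is from AllenNLP 0.x, where the reader lived in the core package; the file path below is a hypothetical example):

from allennlp.data.dataset_readers import UniversalDependenciesDatasetReader

# Each instance produced by the reader holds the words of one sentence
# and their gold POS tags, read from a CoNLL-U file.
reader = UniversalDependenciesDatasetReader()
train_dataset = reader.read('data/en_ewt-ud-train.conllu')  # hypothetical path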

5.2.2 Defining the model and the loss
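
Because the tagger makes one prediction per token, its loss is cross entropy computed at every position and averaged over the non-padded tokens. AllenNLP packages this as sequence_cross_entropy_with_logits; here is a self-contained sketch with random tensors standing in for real model outputs:

import torch
from allennlp.nn.util import sequence_cross_entropy_with_logits

batch, seq_len, num_tags = 2, 5, 10
logits = torch.randn(batch, seq_len, num_tags)    # a score per tag per token
tags = torch.randint(num_tags, (batch, seq_len))  # gold tag ids
mask = torch.ones(batch, seq_len)                 # 1 = real token, 0 = padding
loss = sequence_cross_entropy_with_logits(logits, tags, mask)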

5.2.3 Building the training pipeline
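
Whatever the framework, the training pipeline is the same loop: read a batch, compute the loss, backpropagate, and update the parameters. A self-contained sketch with a toy tagger and random data standing in for the real model and dataset:

import torch
import torch.nn as nn

class ToyTagger(nn.Module):
    """A minimal LSTM tagger: per-token hidden states to per-token tag scores."""
    def __init__(self, dim=16, num_tags=5):
        super().__init__()
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_tags)

    def forward(self, x):
        hidden, _ = self.rnn(x)
        return self.out(hidden)  # (batch, sequence, num_tags)

model = ToyTagger()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(4, 10, 16)     # stand-in for embedded sentences
y = torch.randint(5, (4, 10))  # stand-in for gold tags
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x).reshape(-1, 5), y.reshape(-1))
    loss.backward()
    optimizer.step()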

5.3 Multilayer and bidirectional RNNs

5.3.1 Multilayer RNNs
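
In a multilayer (stacked) RNN, each layer reads the sequence of hidden states produced by the layer below it, so upper layers can model more abstract patterns. In PyTorch, stacking is a single argument:

import torch.nn as nn

# Two stacked LSTM layers: the second layer's input is the first layer's
# sequence of hidden states.
rnn = nn.LSTM(input_size=100, hidden_size=200, num_layers=2, batch_first=True)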

5.3.2 Bidirectional RNNs
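
A bidirectional RNN runs one pass left to right and a second pass right to left, then concatenates the two hidden states at each position, so every token’s representation reflects both its left and its right context. A PyTorch sketch:

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=100, hidden_size=200,
              bidirectional=True, batch_first=True)
out, _ = rnn(torch.randn(8, 20, 100))
print(out.shape)  # torch.Size([8, 20, 400]): 200 dims per direction, concatenated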

5.4 Named entity recognition

5.4.1 What is named entity recognition?

5.4.2 Tagging spans
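
Named entities are spans of tokens, but a sequential labeler predicts one label per token. The standard trick is the BIO scheme: B- marks the first token of an entity, I- marks a token inside it, and O marks everything else. A small made-up illustration:

tokens = ["Apple", "was", "founded", "in", "Cupertino", ",", "California"]
labels = ["B-ORG", "O", "O", "O", "B-LOC", "O", "B-LOC"]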

5.4.3 Implementing a named entity recognizer
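
With BIO labels, a named entity recognizer has the same shape as the POS tagger: encode the sentence, predict one label per token, and decode consecutive B-/I- labels back into entity spans. A library-independent sketch of the decoding step:

def bio_to_spans(labels):
    """Convert BIO labels to (start, end, type) spans; end is exclusive."""
    spans, start, current_type = [], None, None
    for i, label in enumerate(labels + ["O"]):  # sentinel flushes the last span
        if start is not None and (label == "O" or label.startswith("B-")):
            spans.append((start, i, current_type))
            start = None
        if label.startswith("B-"):
            start, current_type = i, label[2:]
    return spans

print(bio_to_spans(["B-ORG", "O", "O", "O", "B-LOC", "O", "B-LOC"]))
# [(0, 1, 'ORG'), (4, 5, 'LOC'), (6, 7, 'LOC')]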

5.5 Modeling a language
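
A language model assigns a probability to a sequence of words. The standard way to make this tractable is the chain rule, which factors the joint probability into one conditional probability per token:

P(w_1, w_2, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})

Each factor asks how likely the next word is given everything before it, which is exactly what an RNN can be trained to estimate. Evaluating text means multiplying these factors together; generating text means sampling from them one word at a time.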