
10 Natural Language Processing with TensorFlow: Language Modelling

 

This chapter covers

  • Implementing a TensorFlow data pipeline that can generate inputs and targets from free text for a language modelling task
  • Implementing a GRU-based language model
  • Defining the perplexity metric in TensorFlow and understanding how to interpret it
  • Training the language model on a text corpus
  • Defining an inference model to generate text based on the trained GRU model
  • Implementing beam search to improve the quality of generated text

In the last chapter, we discussed an important NLP task called sentiment analysis. In that chapter, you used a dataset of video game reviews and trained a model to predict whether a review carries a negative or positive sentiment by analysing its text. You learned about various preprocessing steps you can perform to improve the quality of the text, such as removing stop words and lemmatizing (i.e. converting words to a base form, for example plural to singular). As the model, you used a special type of model known as the long short-term memory (LSTM) model. LSTM models can process sequences such as sentences and learn the relationships and dependencies in them to produce an outcome. They do this by maintaining a state (or memory) containing information about the past as they process a sequence one element at a time. At any given time, the LSTM model can combine the memory of the past inputs it has seen with the current input to produce an output.
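
To make the role of this state concrete, here is a minimal sketch (not code from the chapter; it assumes TensorFlow 2.x, and the batch size, sequence length, and layer sizes are arbitrary toy values) that processes a sequence one element at a time with tf.keras.layers.LSTMCell, feeding the updated state back in at each step.

import tensorflow as tf

# A toy sequence: a batch of 1, 5 time steps, each step an 8-dimensional vector
sequence = tf.random.normal(shape=(1, 5, 8))

# An LSTM cell with 16 units; a cell processes a single time step at a time
cell = tf.keras.layers.LSTMCell(16)

# The initial "memory": the hidden state and the cell state, both all zeros
state = [tf.zeros((1, 16)), tf.zeros((1, 16))]

for t in range(5):
    current_input = sequence[:, t, :]
    # The output at step t depends on the current input and on the state
    # accumulated from all previous steps; the updated state is carried forward
    output, state = cell(current_input, state)

print(output.shape)  # (1, 16): the output produced for the final time step

In practice you would use the tf.keras.layers.LSTM (or, as in this chapter, GRU) layer, which runs this loop internally, but writing the loop out makes the state explicit.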

10.1 Processing the data

10.1.1 What is language modelling?

10.1.2 Downloading and playing with data

10.1.3 Too large vocabulary? N-grams to the rescue

10.1.4 Tokenizing text

10.1.5 Defining a tf.data pipeline

10.2 GRUs in Wonderland: Generating text with deep learning

10.3 Measuring quality of the generated text

10.4 Training and evaluating the language model

10.5 Generating new text from the language model – Greedy decoding

10.6 Beam search: Enhancing the predictive power of sequential models

10.7 Summary