3 Using Hugging Face Transformers and Pipelines for NLP Tasks

 

This chapter covers

  • The Transformer architecture
  • Using the Hugging Face Transformers library
  • Using the pipeline function in the Transformers library
  • Performing NLP tasks using the Transformers library

You have had a glimpse of the Hugging Face Transformers library and how to use one of the pre-trained models hosted on Hugging Face to perform object detection. In this chapter, we first go behind the scenes to see what powers the library: the Transformer architecture and the components that make it work. The aim of this book is not to dive into the detailed inner workings of the Transformer model, but a brief discussion will give you a basic understanding of how things work.

Next, we will use the pipeline() function that ships with the transformers package to perform various NLP tasks, such as text classification, text generation, text summarization, and more.
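
To give you a feel for what is ahead, the listing below is a minimal sketch of a pipeline() call; we will walk through the details later in this chapter. It assumes the transformers package and a supported backend such as PyTorch are installed; "sentiment-analysis" is one of the built-in pipeline tasks, and its default pre-trained model is downloaded from the Hugging Face Hub on first use.

from transformers import pipeline

# create a sentiment-analysis pipeline; the default pre-trained model
# for this task is downloaded from the Hugging Face Hub on first use
classifier = pipeline("sentiment-analysis")

# run the pipeline on a piece of text
result = classifier("Hugging Face pipelines make NLP easy to get started with.")
print(result)
# prints a list with a label and a score, e.g. a 'POSITIVE' label
# with a score close to 1.0 (exact values depend on the model version)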

3.1 Introduction to the Transformer Architecture

3.1.1 Tokenization

3.1.2 Token Embeddings

3.1.3 Positional Encoding

3.1.4 Transformer Block

3.1.5 Softmax

3.2 What are Hugging Face Transformers?

3.2.1 What are Pre-trained Transformers Models?

3.2.2 What are Transformers Pipelines?

3.2.3 Using the Models Directly

3.2.4 Using Transformers Pipelines

3.3 Using Transformers for Natural Language Processing (NLP) Tasks

3.3.1 Text Classification

3.3.2 Text Generation

3.3.3 Text Summarization

3.3.4 Text Translation

3.3.5 Zero-Shot Classification

3.3.6 Question Answering (QA) Tasks

3.4 Summary