This chapter covers
- Understanding the transformer architecture
- Using the Hugging Face Transformers library
- Using the pipeline() function in the Transformers library
- Performing NLP tasks using the Transformers library
You’ve had a glimpse of the Hugging Face Transformers library and seen how to use it to perform object detection with one of the pretrained models hosted by Hugging Face. Now we’ll go behind the scenes of the transformers package to learn about the transformer architecture and the components that make it work. The aim of this book is not to dive deep into the inner workings of the transformer model, but I want to discuss it briefly so that you have a basic understanding of how things work.
Next, we will use the pipeline() function that ships with the transformers package to perform various natural language processing (NLP) tasks, such as text classification, text generation, and text summarization.
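As a quick preview, here is a minimal sketch of a pipeline() call for sentiment analysis (a form of text classification). It assumes the transformers package is installed along with a backend such as PyTorch; the input sentence is illustrative, and the default model downloaded on first use may change between library versions:

```python
from transformers import pipeline

# Create a sentiment-analysis pipeline; if no model is specified,
# a default pretrained model is downloaded the first time this runs.
classifier = pipeline("sentiment-analysis")

# Classify a sample sentence (hypothetical input for illustration)
result = classifier("Hugging Face makes NLP remarkably easy.")
print(result)  # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```

We’ll look at the pipeline() function and each of these tasks in detail later in the chapter.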
NOTE When I talk about Transformers, I’m referring to the open source library created by Hugging Face that provides pretrained transformer models and tools for NLP tasks. Transformer, on the other hand, refers to the neural network architecture discussed in section 3.1.