2 Tuning for a Specific Domain

 

This chapter covers

  • How to prepare data for LLM customization
  • The basics of Retrieval Augmented Generation (RAG)
  • How to fine-tune an LLM
  • Alternatives to fine-tuning

Chapter 1 introduced the core topic and intentions of this book. In this chapter you will learn how to customize some of the most popular open source foundation models on your own data. This is the only chapter that details the tuning process: the main focus of the rest of the book is on inference.

2.1 Data Preparation

Fine-tuning a Transformer model for a given task involves preparing your custom dataset in a format suitable for the model's training process. Here I am going to present two examples, one for an encoder-only model (BERT) and the other for a decoder-only model (GPT-2), both written in Python for PyTorch using the Hugging Face Transformers library (https://github.com/huggingface/transformers). They demonstrate that, through this API, the overall flow remains largely the same, with only minor adjustments depending on the chosen model architecture and task. The final subsection shows how to prepare your data in those cases where Retrieval Augmented Generation (RAG) should be preferred over fine-tuning.

2.1.1 Data Preparation for BERT Fine-Tuning
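To make the flow concrete, here is a minimal sketch of this step, assuming a tiny in-memory sentiment classification dataset; the example texts, the labels, and the bert-base-uncased checkpoint are placeholders for your own domain data and model choice.

import torch
from torch.utils.data import Dataset
from transformers import BertTokenizerFast

# Toy labeled data; in practice this comes from your own domain corpus.
texts = ["The service was excellent.", "I will never buy this again."]
labels = [1, 0]  # 1 = positive, 0 = negative

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

class ClassificationDataset(Dataset):
    """Wraps tokenized texts and labels so a PyTorch DataLoader or the Trainer can consume them."""
    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.encodings = tokenizer(
            texts,
            truncation=True,
            padding="max_length",
            max_length=max_length,
        )
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

train_dataset = ClassificationDataset(texts, labels, tokenizer)

Each item yielded by train_dataset is a dictionary of input_ids, attention_mask, token_type_ids, and labels tensors, which is the format the Transformers training utilities expect for sequence classification.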

2.1.2 Data Preparation for GPT Fine-Tuning
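For a decoder-only model the preparation is even simpler: there is no separate label column, because the text itself is the training signal. The sketch below assumes a plain list of domain sentences; note that GPT-2 ships without a padding token, so the end-of-text token is reused for padding.

import torch
from torch.utils.data import Dataset
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# GPT-2 has no padding token by default; reuse the end-of-text token.
tokenizer.pad_token = tokenizer.eos_token

class CausalLMDataset(Dataset):
    """Tokenizes raw text for causal language modeling: the labels are the input ids themselves."""
    def __init__(self, texts, tokenizer, max_length=128):
        self.encodings = tokenizer(
            texts, truncation=True, padding="max_length", max_length=max_length
        )

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, idx):
        input_ids = torch.tensor(self.encodings["input_ids"][idx])
        return {
            "input_ids": input_ids,
            "attention_mask": torch.tensor(self.encodings["attention_mask"][idx]),
            "labels": input_ids.clone(),  # the model shifts the labels internally
        }

# Toy domain corpus; replace with your own documents.
texts = [
    "Large language models can be adapted to a specific domain.",
    "Fine-tuning updates the model weights on custom data.",
]
train_dataset = CausalLMDataset(texts, tokenizer)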

2.1.3 Data Preparation for RAG
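Data preparation for RAG does not touch the model weights at all: the work consists of splitting your documents into chunks and indexing their embeddings so that relevant passages can be retrieved at query time. The following sketch assumes the sentence-transformers library and a plain NumPy array as the index; any embedding model and vector store could be substituted.

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding library

def chunk_text(text, chunk_size=200, overlap=50):
    """Split a document into overlapping word-level chunks that fit the embedding model."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[start:start + chunk_size]) for start in range(0, len(words), step)]

documents = ["...your domain documents go here..."]  # placeholder corpus
chunks = [chunk for doc in documents for chunk in chunk_text(doc)]

# Embed every chunk once, offline; at query time only the question is embedded.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
chunk_embeddings = embedder.encode(chunks, normalize_embeddings=True)  # shape: (n_chunks, dim)
np.save("chunk_embeddings.npy", chunk_embeddings)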

2.2 Retrieval Augmented Generation
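As a minimal, end-to-end illustration (not the production setup discussed later), the snippet below reuses the chunks and embeddings prepared in section 2.1.3, retrieves the passages most similar to a question, and passes them as context to a generator; the gpt2 text-generation pipeline merely stands in for whatever foundation model you actually deploy.

import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

embedder = SentenceTransformer("all-MiniLM-L6-v2")     # must match the model used at indexing time
generator = pipeline("text-generation", model="gpt2")  # placeholder generator

def retrieve(question, chunks, chunk_embeddings, top_k=3):
    """Return the top_k chunks most similar to the question (dot product on normalized vectors)."""
    query_emb = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_embeddings @ query_emb
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

def answer(question, chunks, chunk_embeddings):
    """Build a prompt from the retrieved context and let the generator complete it."""
    context = "\n".join(retrieve(question, chunks, chunk_embeddings))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=100)[0]["generated_text"]

# Usage, with chunks and chunk_embeddings coming from the preparation step in 2.1.3:
# print(answer("What does fine-tuning change?", chunks, chunk_embeddings))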

2.3 Fine-tuning
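To give an idea of what the training loop itself looks like before the details are discussed, here is a sketch based on the Transformers Trainer API, reusing the classification dataset prepared in section 2.1.1; the hyperparameters shown are illustrative defaults, not recommendations.

from transformers import BertForSequenceClassification, Trainer, TrainingArguments

# Binary classification head on top of the pretrained encoder.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

training_args = TrainingArguments(
    output_dir="bert-finetuned",        # where checkpoints are written
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,        # the dataset built in section 2.1.1
)
trainer.train()
trainer.save_model("bert-finetuned")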

2.4 LoRA
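As a preview, this is roughly how LoRA is applied in practice with the Hugging Face PEFT library (an assumption here, not necessarily the tooling used in the rest of the chapter): the base model stays frozen and only small low-rank adapter matrices are trained.

from transformers import GPT2LMHeadModel
from peft import LoraConfig, TaskType, get_peft_model  # assumed: Hugging Face PEFT library

base_model = GPT2LMHeadModel.from_pretrained("gpt2")

# LoRA injects trainable low-rank matrices into the chosen layers
# while the original weights remain frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the weights is trainable
# The wrapped model can be passed to the same Trainer loop used for full fine-tuning.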

2.5 RAG or fine-tuning?

2.6 Summary