6 Fine-tuning for classification


This chapter covers

  • Introducing different LLM fine-tuning approaches
  • Preparing a dataset for text classification
  • Modifying a pretrained LLM for fine-tuning
  • Fine-tuning an LLM to identify spam messages
  • Evaluating the accuracy of a fine-tuned LLM classifier
  • Using a fine-tuned LLM to classify new data

So far, we have coded the LLM architecture, pretrained it, and learned how to import pretrained weights from an external source, such as OpenAI, into our model. Now we will reap the fruits of our labor by fine-tuning the LLM on a specific target task, such as classifying text. The concrete example we examine is classifying text messages as “spam” or “not spam.” Figure 6.1 highlights the two main ways of fine-tuning an LLM: fine-tuning for classification (step 8) and fine-tuning to follow instructions (step 9).

Figure 6.1 The three main stages of coding an LLM. This chapter focuses on stage 3 (step 8): fine-tuning a pretrained LLM as a classifier.

6.1 Different categories of fine-tuning

The most common ways to fine-tune language models are instruction fine-tuning and classification fine-tuning. Instruction fine-tuning involves training a language model on a set of tasks using specific instructions to improve its ability to understand and execute tasks described in natural language prompts, as illustrated in figure 6.2.
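To make the distinction concrete, the following sketch contrasts what a single training example might look like under each approach. These toy records and the field names (`instruction`, `input`, `output`, `text`, `label`) are illustrative assumptions, not the chapter's actual dataset format:

```python
# Hypothetical toy records contrasting the two fine-tuning dataset formats.

# Instruction fine-tuning: each example pairs a natural-language instruction
# (plus optional input text) with the desired free-form response.
instruction_example = {
    "instruction": "Classify the following message as spam or not spam.",
    "input": "You won a free cruise! Reply now to claim.",
    "output": "spam",
}

# Classification fine-tuning: each example pairs raw text with an integer
# class label, and the model predicts the label directly.
classification_example = {
    "text": "You won a free cruise! Reply now to claim.",
    "label": 1,  # assumed encoding: 1 = spam, 0 = not spam
}
```

The key practical difference is that classification fine-tuning restricts the model's output to a fixed set of class labels, whereas instruction fine-tuning keeps the output open-ended, which makes the model more general but typically requires more data.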

6.2 Preparing the dataset

6.3 Creating data loaders

6.4 Initializing a model with pretrained weights

6.5 Adding a classification head

6.6 Calculating the classification loss and accuracy

6.7 Fine-tuning the model on supervised data

6.8 Using the LLM as a spam classifier

Summary
