12 Fine-tuning LLMs with business domain knowledge

This chapter covers

  • The fine-tuning process for an LLM
  • Preparing a data set to use for fine-tuning
  • Using fine-tuning tools to better understand the process

Although the effects of large language models (LLMs) on various industries have been covered extensively in the mainstream media, the ubiquity and popularity of LLMs have also fueled a quiet revolution in the open source AI community. Through the spirit of open collaboration and the support of big technology companies, fine-tuning AI models has become increasingly accessible to AI enthusiasts. The result is a vibrant community that experiments with and shares a wide range of processes and tools, which we can use to better understand how fine-tuning works and to tune models ourselves, individually or in teams.

12.1 Exploring the fine-tuning process

12.1.1 A map of the fine-tuning process

12.1.2 Goal setting

12.2 Executing a fine-tuning session

12.2.1 Preparing data for training

12.2.2 Preprocessing and setup

12.2.3 Working with fine-tuning tools

12.2.4 Setting off a fine-tuning run

12.2.5 Testing the results of a fine-tune

12.2.6 Lessons learned with fine-tuning

Summary