5 Fine-tuning Foundational Models on AWS
This chapter covers
- An overview of fine-tuning foundational models
- Instruction fine-tuning on Amazon Bedrock and Amazon SageMaker
- Creating a dataset for fine-tuning
- Fine-tuning with Llama 2 on Amazon Bedrock
- Reinforcement learning
Fine-tuning foundational models is crucial for enhancing their capabilities, especially in new domains and for specific tasks. Pre-trained models demonstrate strong general performance, but they may struggle with domain-specific language, vocabulary, or unique patterns. Fine-tuning helps these models specialize in particular domains, improving their accuracy and efficiency. It also allows a model to perform tasks more accurately in cases where the foundational model's general training does not fully align with the specific requirements of the new task. Finally, fine-tuning introduces cost savings at inference time: a fine-tuned model no longer needs the long prompts and numerous in-context examples that few-shot learning requires. This can significantly streamline operations without compromising performance.
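To make the inference-cost argument concrete, the sketch below compares the token footprint of a few-shot prompt against the short prompt a fine-tuned model could use for the same task. The four-characters-per-token heuristic and the per-token price are illustrative assumptions for the sake of the comparison, not a real tokenizer or actual AWS pricing.

```python
# Rough illustration of why fine-tuning can cut inference cost:
# a fine-tuned model needs only the task input, while a base model
# often needs instructions plus several in-context examples.

def estimate_tokens(text: str) -> int:
    """Crude heuristic: ~4 characters per token (an assumption, not a real tokenizer)."""
    return max(1, len(text) // 4)

# A few-shot prompt: instructions plus worked examples, sent on every request.
FEW_SHOT_PROMPT = """Classify the sentiment of the review as positive or negative.

Review: The battery died after two days. Sentiment: negative
Review: Setup was effortless and the screen is gorgeous. Sentiment: positive
Review: Shipping took a month and support never replied. Sentiment: negative

Review: This laptop exceeded my expectations. Sentiment:"""

# A fine-tuned model has already learned the task, so only the input is needed.
FINE_TUNED_PROMPT = "Review: This laptop exceeded my expectations. Sentiment:"

PRICE_PER_1K_INPUT_TOKENS = 0.003  # hypothetical price, not actual AWS pricing

def prompt_cost(prompt: str, requests: int = 1_000_000) -> float:
    """Estimated input-token cost of sending this prompt for `requests` calls."""
    return estimate_tokens(prompt) * requests * PRICE_PER_1K_INPUT_TOKENS / 1_000

print(f"Few-shot prompt:   ~{estimate_tokens(FEW_SHOT_PROMPT)} tokens/request")
print(f"Fine-tuned prompt: ~{estimate_tokens(FINE_TUNED_PROMPT)} tokens/request")
savings = prompt_cost(FEW_SHOT_PROMPT) - prompt_cost(FINE_TUNED_PROMPT)
print(f"Estimated savings at 1M requests: ${savings:,.2f}")
```

The exact numbers depend on the model and tokenizer, but the shape of the result holds: the few-shot examples are paid for on every single request, while the cost of fine-tuning is paid once up front.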