10 Introducing customized LLMs


This chapter covers

  • Articulating how a lack of context impacts an LLM’s performance
  • Outlining how retrieval-augmented generation (RAG) works and its value
  • Outlining how fine-tuning LLMs works and its value
  • Comparing RAG and fine-tuning approaches

10.1 The challenge with LLMs and context

10.1.1 Tokens, context windows, and limitations

10.1.2 Baking in context as a solution

10.2 Embedding context further into prompts and LLMs

10.2.1 Retrieval-augmented generation

10.2.2 Fine-tuning large language models

10.2.3 Comparing the two approaches

10.2.4 Combining RAG and fine-tuning

10.3 Summary