1 Improving LLM accuracy


This chapter covers

  • Large language models
  • Limitations of large language models
  • Shortcomings of continuously finetuning a model
  • Retrieval-augmented generation
  • Combining structured and unstructured data to support all types of questions

Large language models (LLMs) have shown impressive abilities across a variety of domains, but they have significant limitations that affect their utility, particularly when tasked with generating accurate and up-to-date information. One widely adopted approach to addressing these limitations is retrieval-augmented generation (RAG), a workflow that combines an LLM with an external knowledge base to deliver accurate and current responses. By pulling data from trusted sources at run time, RAG can significantly reduce, though not completely eliminate, hallucinations, which remain one of the most persistent challenges with LLMs. In addition, RAG allows systems to seamlessly bridge general knowledge with niche, domain-specific information that may not be well represented in the model's pretraining data. Despite these advantages, RAG implementations have often focused solely on unstructured data, overlooking the potential of structured sources such as knowledge graphs.
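The retrieve-then-generate workflow described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not a production implementation: the keyword-overlap `retrieve` function stands in for a real vector or graph search, and the `llm` function is a placeholder for an actual model call; all names here are illustrative assumptions.

```python
# Minimal RAG sketch. The keyword-overlap retriever and the llm() stub are
# illustrative stand-ins for a real vector/graph search and a real model call.

KNOWLEDGE_BASE = [
    "RAG combines an LLM with an external knowledge base.",
    "Knowledge graphs store structured, interconnected facts.",
    "Supervised finetuning updates model weights with labeled examples.",
]

def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Augment the question with retrieved context to ground the answer."""
    return ("Answer using only the context below.\n"
            "Context:\n" + "\n".join(f"- {c}" for c in context) +
            f"\nQuestion: {question}")

def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"(model answer grounded in: {prompt[:40]}...)"

question = "What does RAG combine?"
context = retrieve(question, KNOWLEDGE_BASE)
answer = llm(build_prompt(question, context))
```

The key point the sketch captures is that the model never answers from its parameters alone: relevant documents are fetched at run time and injected into the prompt, which is what lets a RAG system stay current without retraining.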

1.1 Introduction to LLMs

1.2 Limitations of LLMs

1.2.1 Knowledge cutoff problem

1.2.2 Outdated information

1.2.3 Pure hallucinations

1.2.4 Lack of private information

1.3 Overcoming the limitations of LLMs

1.3.1 Supervised finetuning

1.3.2 Retrieval-augmented generation

1.4 Knowledge graphs as the data storage for RAG applications

Summary