15 Foundation models and emerging search paradigms
This chapter covers
- Retrieval-augmented generation (RAG)
- Generative search for results summarization and abstractive question answering
- Integrating foundation models, optimizing prompts, and evaluating model quality
- Generating synthetic data for model training
- Implementing multimodal and hybrid search
- The future of AI-powered search
Large language models (LLMs), like the ones we’ve tested and fine-tuned in the last two chapters, have been front and center in recent advances in AI-powered search. You’ve already seen some of the key ways these models can enhance search quality, from improving query interpretation and document understanding by mapping content into embeddings for dense vector search, to extracting answers to questions from within documents.
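As a quick refresher before we build on it, the dense vector retrieval step can be sketched in a few lines. This is a minimal illustration only, assuming the sentence-transformers library; the model name and documents are placeholders, not the specific model or collection used elsewhere in the book.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Placeholder bi-encoder; substitute whichever embedding model you fine-tuned earlier
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["Dense vector search ranks documents by embedding similarity.",
        "Fine-tuning an LLM can improve domain-specific relevance."]
query = "How does dense vector search rank results?"

# Encode documents and query into the same embedding space
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

# With normalized vectors, cosine similarity reduces to a dot product
scores = doc_vecs @ query_vec
for rank in np.argsort(-scores):
    print(f"{scores[rank]:.3f}  {docs[rank]}")
```

In production you would index the document embeddings in your search engine’s vector field and let its approximate nearest-neighbor search do the ranking, but the core idea, comparing query and document embeddings in a shared space, is the same.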