15 Foundation models and emerging search paradigms


This chapter covers

  • Generative search, prompt engineering, and Retrieval Augmented Generation (RAG)
  • Integrating foundation models for results summarization, citations, and abstractive question-answering
  • Evaluating output from generative search models
  • Implementing multimodal, cross-modal, and hybrid search
  • Generating synthetic training data for search relevance
  • Emerging search paradigms and the future of AI-powered search

Large language models (LLMs), which we tested and fine-tuned in the last two chapters, have been front and center in recent advances in AI-powered search. From improving query interpretation and document understanding by mapping content into embeddings for dense vector search, to helping with answer extraction, you've already seen some of the key ways these models can enhance search quality.
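As a quick refresher before we dive in, here is a minimal sketch of the embed-then-score pattern behind dense vector search: content and queries are mapped into the same vector space, and relevance is scored by vector similarity. The sentence-transformers library and the all-MiniLM-L6-v2 model are illustrative choices here, not this chapter's reference setup.

from sentence_transformers import SentenceTransformer  # assumed library choice
import numpy as np

# Illustrative model choice; any sentence-embedding model works similarly.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "RAG grounds generated answers in retrieved documents.",
    "Dense vector search ranks content by embedding similarity.",
    "Prompt engineering shapes the behavior of foundation models.",
]
# Normalized embeddings let a simple dot product serve as cosine similarity.
doc_vectors = model.encode(docs, normalize_embeddings=True)

query = "How do embeddings improve search relevance?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vectors @ query_vector
for idx in np.argsort(-scores):  # highest-scoring documents first
    print(f"{scores[idx]:.3f}  {docs[idx]}")

The same embed-then-score loop underlies the retrieval step in RAG and the multimodal techniques covered later in this chapter, though in production the similarity search is typically delegated to a vector index rather than a brute-force dot product.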

15.1 Understanding foundation models

15.1.1 Training vs. fine-tuning vs. prompting

15.2.1 Retrieval Augmented Generation (RAG)

15.2.2 Results summarization using foundation models

15.2.3 Data generation using foundation models

15.2.4 Evaluating generative output

15.2.5 Constructing your own metric

15.4 Other emerging AI-powered search paradigms

15.5 Convergence of contextual technologies

15.6 Hybrid search: all of the above, please!

15.7 Summary