14 Ask a KG with natural language

 

This chapter covers

  • Understanding the limitations of RAG in complex scenarios
  • Building an advanced question-answering system that emulates domain expertise when querying KGs
  • Transforming query results into meaningful, actionable summaries

In this chapter, we will explore how to build an advanced system that answers questions over a knowledge graph effectively. Using a law enforcement example as our guide, we'll compare the standard RAG approach with our new "expert emulation" method, which captures the expertise of skilled information retrieval specialists. We will walk you through the key concepts and components needed to build this system, giving you a solid foundation for creating reliable question-answering applications.

The framework we'll develop rests on several key pillars:

  • Understanding and properly routing different types of user questions
  • Extracting and representing domain knowledge in a form that LLMs can effectively utilize
  • Implementing expert-like reasoning patterns for query construction
  • Ensuring results are presented in meaningful, actionable ways

This framework is specifically designed to integrate with a front-end layer, ensuring that the question-answering system can be effectively presented to end users through a graphical interface. This integration-first approach influences many of our design decisions throughout the chapter, from how we structure query responses to how we handle data visualization.
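To make these pillars concrete before we dive in, the following sketch shows how the pieces could fit together, assuming a Python stack and a Cypher-queryable graph database such as Neo4j. The function names (detect_intent, build_schema_context, generate_cypher, summarize) and their placeholder bodies are illustrative only, not the final implementation; the real components are developed step by step in sections 14.5 through 14.8.

from dataclasses import dataclass


@dataclass
class Answer:
    cypher: str    # the generated query (section 14.7)
    summary: str   # the natural-language summary (section 14.8)


def detect_intent(question: str) -> str:
    """Classify the question: data lookup, documentation, or neither (14.5)."""
    return "data"  # placeholder


def build_schema_context(question: str) -> str:
    """Turn the graph schema plus descriptive annotations into LLM-ready context (14.6)."""
    return "(:Person)-[:INVOLVED_IN]->(:Case)"  # placeholder schema fragment


def generate_cypher(question: str, schema_context: str) -> str:
    """Ask the LLM to reason first, then emit a Cypher query (14.7)."""
    return "MATCH (p:Person)-[:INVOLVED_IN]->(c:Case) RETURN p, c LIMIT 5"  # placeholder


def summarize(question: str, records: list) -> str:
    """Condense raw query results into an actionable summary (14.8)."""
    return f"Found {len(records)} matching records."


def answer(question: str) -> Answer:
    intent = detect_intent(question)
    if intent != "data":
        return Answer(cypher="", summary="This question is routed elsewhere.")
    schema_context = build_schema_context(question)
    cypher = generate_cypher(question, schema_context)
    records = []  # in the full system: run `cypher` against the knowledge graph
    return Answer(cypher=cypher, summary=summarize(question, records))


if __name__ == "__main__":
    print(answer("Which people are involved in open cases?"))

Keep this skeleton in mind as you read: each section of the chapter replaces one of these placeholders with a working component, and the front-end integration discussed above consumes the resulting Answer object.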

14.1 Querying a knowledge graph in the policing domain

14.1.1 Enabling domain experts with knowledge graphs

14.2 Limitations of RAG in complex real-world scenarios

14.2.1 RAG in a law enforcement environment

14.2.2 Understanding RAG's core limitations

14.3 Schema-based approach for querying knowledge graphs

14.3.1 Understanding and leveraging graph schema

14.4 Think like an expert: leveraging metadata for enhanced querying

14.5 Intent detection: understanding user expectations

14.5.1 Classifying by visualization type

14.5.2 Is it data, documentation, or just complaining?

14.6 From schema to LLM-ready context

14.6.1 Schema extraction and representation

14.6.2 Enriching the schema with descriptive annotations

14.6.3 A practical approach to schema representation

14.7 It’s time to think: understanding LLM reasoning

14.7.1 The order matters: answer-first vs. reasoning-first

14.7.2 Thinking in queries: from text to Cypher

14.7.3 Structuring output for reliable query generation

14.8 Response summarization: from results to insights

14.9 Summary

14.10 References