7 Augmenting intent data with generative AI

 

This chapter covers

  • Creating new training and testing examples with generative AI
  • Identifying gaps in your current conversational AI data
  • Using LLMs to build new intents in your conversational AI

Conversational AI users are frustrated when the AI does not understand them—especially if it happens multiple times! This applies to all conversational AI types, including question-answering bots, process-oriented bots, and routing agents. We’ve seen multiple strategies for improving the AI’s ability to understand. The first strategy—improving intent training manually (chapter 5)—gives full control to the human builder, but it takes time and specialized skill. The second strategy—retrieval-augmented generation (RAG, chapter 6)—gives much more control to generative AI, reducing the role of the human builder over time. This chapter introduces a hybrid approach in which generative AI augments the builder. The approach applies to both rules-based and generative AI–based systems.

Using generative AI as a “muse” for the human builder reduces the effort and time required of the builder, increases the amount of test data available for data science activities, and leaves the final say with the human. That final say eliminates most opportunities for hallucinations (output that looks plausible but is not true).
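This "muse" workflow—the LLM proposes candidate training examples, the human builder approves or rejects them—can be sketched in a few lines. Here `call_llm` is a hypothetical placeholder (not a real library API), stubbed with canned output so the sketch runs on its own; a real implementation would call whatever LLM client you use.

```python
# Sketch of the "generative AI as muse" workflow: the LLM proposes new
# training utterances for an intent; the human builder has the final say.

def call_llm(prompt: str) -> list[str]:
    # Hypothetical placeholder: a real implementation would send `prompt`
    # to an LLM and parse its response into candidate utterances.
    return [
        "I want to reset my password",
        "help me change my password",
        "my password expired",
    ]

def propose_examples(intent: str, seeds: list[str]) -> list[str]:
    # Ask the LLM for new utterances, conditioned on existing seed examples.
    prompt = (
        f"Generate new training utterances for the intent '{intent}'. "
        f"Seed examples: {seeds}"
    )
    return call_llm(prompt)

def human_review(candidates: list[str], approved: set[str]) -> list[str]:
    # The builder keeps only candidates they explicitly approve, filtering
    # out hallucinated or off-intent utterances before they reach training.
    return [c for c in candidates if c in approved]

candidates = propose_examples("reset_password", ["reset my password"])
training_data = human_review(
    candidates,
    approved={"I want to reset my password", "help me change my password"},
)
print(training_data)
# → ['I want to reset my password', 'help me change my password']
```

The key design point is the review gate: generated examples only enter the training set after explicit human approval, which is what keeps hallucinations out of the final intent data.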

7.1 Getting started

7.1.1 Why do it: Pros and cons

7.1.2 What you need

7.1.3 How to use the augmented data

7.2 Hardening your existing intents

7.2.1 Get creative with synonyms

7.2.2 Generate new grammatical variations

7.2.3 Build strong intents from LLM output

7.2.4 Creating even more examples with templates

7.3 Getting more creative

7.3.1 Brainstorm additional intents

7.3.2 Check for confusion

Summary