7 Augmenting intent data with generative AI

 

This chapter covers

  • Creating new training and testing examples with generative AI
  • Identifying gaps in your current conversational AI data
  • Using LLMs to build new intents in your conversational AI

Conversational AI users get frustrated when the AI does not understand them, especially when it happens repeatedly. This is true for every type of conversational AI, including question-answering bots, process-oriented bots, and routing agents. We’ve seen multiple strategies for improving the AI’s ability to understand. The first strategy, improving intent training manually (Chapter 5), gives full control to the human builder but takes time and specialized skill. The second strategy, retrieval-augmented generation (RAG, Chapter 6), gives much more control to generative AI, reducing the role of the human builder over time. This chapter introduces a hybrid approach in which generative AI augments the builder, and it applies to both rules-based and generative AI-based systems.

Using generative AI as a “muse” for the human builder has several benefits:

  • Generative AI reduces the time and effort required of the human builder.
  • Generative AI increases the amount of test data available for data science activities.
  • Giving the human builder the final say eliminates most opportunities for hallucinations (output that “looks reasonable” but is not true).
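The “muse” workflow above can be sketched in a few lines of Python. Everything here is illustrative: `build_augmentation_prompt` and `review` are hypothetical helpers, not part of any specific library, and the LLM call itself (not shown) could be any chat-completion API. The key point is the division of labor: the model only proposes candidate utterances, and the builder’s approval decision controls what actually enters the training data.

```python
def build_augmentation_prompt(intent_name, seed_examples, n=5):
    """Build a prompt asking an LLM to propose new training utterances
    for an intent, seeded with the builder's existing examples."""
    examples = "\n".join(f"- {e}" for e in seed_examples)
    return (
        f"Generate {n} new ways a user might express the intent "
        f"'{intent_name}'. Vary the vocabulary and grammar.\n"
        f"Existing examples:\n{examples}\n"
        f"Return one utterance per line."
    )


def review(candidates, approve):
    """The human builder keeps the final say: only approved candidates
    become training data, which screens out hallucinated utterances."""
    return [c for c in candidates if approve(c)]


# Hypothetical usage: send the prompt to your LLM of choice, split its
# response into lines, then pass those candidates through review().
prompt = build_augmentation_prompt(
    "reset_password",
    ["I forgot my password", "help me reset my login"],
)
```

In practice the `approve` callback would be an interactive review step (or a UI), but modeling it as a plain function keeps the sketch testable and makes the human checkpoint explicit in code.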

7.1 Getting started

7.1.1 Why to do it: pros and cons

7.1.2 What you need

7.1.3 How to use the augmented data

7.1.4 Exercises

7.2 Hardening your existing intents

7.2.1 Getting creative with synonyms

7.2.2 Generating new grammatical variations

7.2.3 Building strong intents from LLM output

7.2.4 Creating even more examples with templates

7.2.5 Exercises

7.3 Getting more creative

7.3.1 Brainstorming additional intents

7.3.2 Checking for confusion

7.4 Exercises

7.5 Summary