7 Augmenting intent data with generative AI
This chapter covers
- Creating new training and testing examples with generative AI
- Identifying gaps in your current conversational AI data
- Using LLMs to build new intents in your conversational AI
Conversational AI users get frustrated when the AI does not understand them, especially when it happens multiple times! This is true for every type of conversational AI, including question-answering bots, process-oriented bots, and routing agents. We’ve seen multiple strategies for improving the AI’s ability to understand. The first strategy, manually improving the intent training data (Chapter 5), gives the human builder full control but takes time and specialized skill. The second strategy, retrieval-augmented generation (RAG, Chapter 6), hands much more control to generative AI, shrinking the human builder’s role over time. This chapter introduces a hybrid approach in which generative AI augments the builder, and it works whether your system is rules-based or built on generative AI.
Using generative AI as a “muse” for the human builder has several benefits:
- Generative AI reduces the time and effort required of the human builder.
- Generative AI increases the amount of test data available for data science activities.
- Giving the human builder the final say eliminates most opportunities for hallucinations (when the AI says something that “looks reasonable” but is not true).
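
To make the workflow concrete, here is a minimal sketch of the hybrid approach: the LLM proposes candidate training utterances for an intent, and the human builder approves, edits, or discards each one. The `call_llm`, `propose_utterances`, and `review` functions, the prompt wording, and the `reset_password` intent are all illustrative assumptions, not a prescribed implementation; wire `call_llm` to whichever LLM provider you use.

```python
# Sketch: an LLM proposes candidate utterances; the human builder has the final say.
# call_llm() is a hypothetical stand-in for your LLM provider's completion call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your LLM provider's API call")

def propose_utterances(intent_name: str, examples: list[str], n: int = 10) -> list[str]:
    """Ask the LLM for new training utterances similar to the existing examples."""
    prompt = (
        f"You are helping build a conversational AI.\n"
        f"Intent: {intent_name}\n"
        f"Existing training examples:\n"
        + "\n".join(f"- {e}" for e in examples)
        + f"\n\nWrite {n} new, varied user utterances for this intent, one per line."
    )
    response = call_llm(prompt)
    # Drop blank lines, strip bullet markers, and deduplicate against the seed examples,
    # since LLMs often echo the examples back.
    existing = {e.lower().strip() for e in examples}
    candidates = [line.strip("- ").strip() for line in response.splitlines() if line.strip()]
    return [c for c in candidates if c.lower() not in existing]

def review(candidates: list[str]) -> list[str]:
    """The human builder keeps or discards each candidate utterance."""
    approved = []
    for candidate in candidates:
        answer = input(f"Keep '{candidate}'? [y/N] ").strip().lower()
        if answer == "y":
            approved.append(candidate)
    return approved
```

With `call_llm` connected to a real model, you might call `review(propose_utterances("reset_password", ["I forgot my password"]))` and add only the approved utterances to your training data, keeping the builder in control of what the AI learns.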