Part 3. Training and testing


You’ve selected a use case requiring an AI assistant and designed all aspects of the experience. Now it’s time to start constructing the assistant.

AI assistants are built on top of machine learning classification algorithms called classifiers. A classifier learns through a training process in which it is taught by example: given a set of labeled data, it infers patterns and learns to differentiate one kind of input from another.
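
As a rough illustration of "training by example" (a minimal sketch using scikit-learn and a few hypothetical intent labels, not the assistant platform used in this book), a text classifier can be fit on labeled utterances and then asked to classify new ones:

```python
# Minimal sketch: training a text classifier by example (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example pairs an utterance with the intent it demonstrates.
utterances = [
    "I forgot my password",
    "How do I reset my password?",
    "What are your store hours?",
    "When do you open on Saturday?",
]
intents = ["#reset_password", "#reset_password", "#store_hours", "#store_hours"]

# The classifier infers patterns from the examples rather than following
# hand-written rules.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, intents)

# A new utterance is classified based on the patterns learned above; the
# prediction depends entirely on the training data provided.
print(classifier.predict(["I can't log in to my account"]))
```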

In chapter 6, you will learn how to train your assistant. Training starts with finding a suitable data source, and several possible sources are described. This data must then be curated and organized so the classifier can train on it. Training data must have variety, volume, and veracity, and each of these "three Vs" has a big impact on the classifier.
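
For example, one quick check on the volume dimension is simply counting how many curated examples each intent has (a hypothetical sketch, not the book's curation process):

```python
# Minimal sketch: checking training-data volume per intent (hypothetical data).
from collections import Counter

training_data = [
    ("I forgot my password", "#reset_password"),
    ("How do I reset my password?", "#reset_password"),
    ("What are your store hours?", "#store_hours"),
]

volume_per_intent = Counter(intent for _, intent in training_data)
for intent, count in volume_per_intent.most_common():
    # Intents with very few examples are candidates for more data collection.
    print(f"{intent}: {count} examples")
```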

Testing AI assistants requires two different disciplines. A data science discipline is needed to test the accuracy of the assistant, and a software engineering discipline is needed to test the remaining functionality, particularly the dialog flows. These disciplines are covered in separate chapters.

Chapter 7 covers the data science side of testing. In this chapter, you'll learn methodologies and metrics for testing how accurately your assistant identifies user intents, and how to identify the most important mistakes your assistant makes.
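
To give a flavor of this kind of testing (a minimal sketch with hypothetical test results, not the methodology chapter 7 develops), accuracy can be computed on a held-out test set and the expected-versus-predicted mismatches counted to surface the most frequent confusions:

```python
# Minimal sketch: measuring intent accuracy and finding frequent mistakes
# (hypothetical expected/predicted intents for a held-out test set).
from collections import Counter
from sklearn.metrics import accuracy_score

expected = ["#reset_password", "#store_hours", "#store_hours", "#reset_password"]
predicted = ["#reset_password", "#reset_password", "#store_hours", "#reset_password"]

# Overall accuracy: fraction of test utterances classified correctly.
print("Accuracy:", accuracy_score(expected, predicted))

# Count each (expected, predicted) mismatch to see which confusions occur most.
confusions = Counter((e, p) for e, p in zip(expected, predicted) if e != p)
for (e, p), count in confusions.most_common():
    print(f"{e} misclassified as {p}: {count} time(s)")
```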