3 Running a voice-first application – and noticing issues


This chapter covers

  • Creating and testing a simple voice-first interaction
  • Code samples for a simple Google Assistant action
  • Introduction to the architecture that incorporates Google Assistant
  • Pros and cons of relying on tools for voice development

In the previous two chapters, you were introduced to voice interaction technology and learned why some aspects of conversational interaction are harder to get right than others, for both humans and machines. Now it’s time to jump in and get your own quick voice-first interaction up and running, staying in the familiar food domain. Finding a restaurant is a convenient test bed for introducing the core concepts: it’s a task you’re probably familiar with, and it touches on many voice-first concepts. The task seems simple, but things get complicated pretty quickly. When you expand the functionality to handle real users, you’ll stray from the happy path fast, but let’s not worry about real life yet.

3.1       Hands-on: Preparing the restaurant finder

3.2       Say hello to voice platforms

3.3       Hands-on: A Google restaurant finder action

3.3.1   Basic setup

3.3.2   Specifying a first intent

3.3.3   Doing something

3.3.6   Connecting Dialogflow to Actions on Google

3.3.8   Saving the voice interaction

3.4       Why we’re using Actions on Google and Assistant

3.5       Google’s voice development ecosystem

3.6       The pros and cons of relying on tools

3.7       Hands-on: Making changes

3.7.1   Adding more phrases

3.7.2   Eating something else

3.7.3   Asking for something more specific

3.8       What’s next?

3.9       Summary