
2 Building your first generative AI web application


This chapter covers

  • Setting up a simple generative AI web app with React
  • Interfacing with the OpenAI client
  • Introducing Next.js and adopting it as our backend service

Let’s start our journey by building a simple yet effective conversational app that demonstrates the core principles of large language model (LLM)–powered web applications. By “conversational,” I mean that users will interact with our AI app in natural language through text input, much like the chat widgets often found on websites for support or help. Conversational AI can also involve voice input and spoken answers, but our initial focus will be on text-based interaction. Our app will become more versatile as we progress through this book and add more advanced capabilities: it will eventually accept audio recordings, generate images, employ advanced tooling, and even answer questions about private data. Our goal is a flexible, highly adaptable app that can select the right model for the task at hand, giving us the freedom to customize its behavior.
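
To make this concrete, the sketch below shows the kind of request/response exchange the whole app revolves around, using the official OpenAI Node.js client (the openai package). The model name, prompts, and helper function here are illustrative placeholders, not the code we will write in this chapter:

import OpenAI from "openai";

// Reads the API key from the OPENAI_API_KEY environment variable.
const client = new OpenAI();

// Hypothetical helper: send one user message and return the assistant's reply.
async function askAssistant(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; any chat-capable model works here
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: userMessage },
    ],
  });
  // The reply text lives in the first choice's message content.
  return completion.choices[0]?.message?.content ?? "";
}

askAssistant("Hello! What can you build with me?").then(console.log);

In the finished app, this call will sit behind our own backend endpoint rather than running in the browser, so the API key is never exposed to users; the rest of the chapter walks through that frontend/backend split step by step.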

2.1 Introducing Astra

2.2 Project goal and requirements

2.2.1 Goal: Build a simple interactive AI chat interface

2.2.2 Project and technology requirements

2.2.3 Setting up

2.2.4 Running the project

2.3 Under the hood: The generative AI lifecycle

2.4 Designing for a better user experience

2.5 Building the major components

2.5.1 Frontend

2.5.2 Autoscroll

2.5.3 ChatPage

2.5.4 ChatList

2.5.5 The backend: Handling API communication

2.5.6 Tests

2.5.7 Common challenges and solutions

2.6 Assessing the app’s first iteration

2.7 Migrating the app to Next.js

2.7.1 Setting up

2.7.2 Running the project

2.8 Routing and configuration in Next.js

2.8.1 File-based routing