8 Large Language Model Applications: Building an interactive experience

 

This chapter covers

  • How to build an interactive application that uses an LLM service.
  • How to run LLMs on edge devices without a GPU.
  • How to build LLM agents that can solve multi-step problems.

Throughout this book, we’ve taught you the ins and outs of LLMs: how to train them, how to deploy them, and, in the last chapter, how to build a prompt that guides a model to behave the way we want it to. In this chapter, we will be putting it all together. We will show you how to build an application that can use your deployed LLM service and create a delightful experience for an actual user. The key word there is delightful. Creating a simple application is easy, as we will show, but creating one that delights? Well, that’s a bit more difficult. We’ll discuss several features you’ll want to add to your application and why. Then we’ll look at the different places your application may live, including building such applications for edge devices. Lastly, we’ll dive into the world of LLM agents, building applications that can fulfill a role, not just a request.
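Before digging into individual features, here is a minimal sketch of the kind of application this chapter builds toward: a small command-line chat loop that sends each user turn to a deployed LLM service and prints the reply. It assumes an OpenAI-compatible /v1/chat/completions endpoint; the URL and model name below are placeholders, so swap in whatever interface your own deployment from earlier chapters exposes.

# A minimal sketch of an interactive chat loop against a deployed LLM service.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint; the URL and
# model name are placeholders for your own deployment.
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint
MODEL = "my-deployed-model"                             # hypothetical model name

# Keep the running conversation so the model sees prior turns as context.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = requests.post(
        API_URL,
        json={"model": MODEL, "messages": history},
        timeout=60,
    )
    reply = response.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print(f"Assistant: {reply}")

Even this toy loop hints at the features the rest of the chapter covers: streaming the reply instead of waiting for it, managing the growing history, counting tokens before the context window overflows, and grounding answers with retrieval.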

8.1 Building an Application

 
 
 
 

8.1.1 Streaming on the Frontend

 
 

8.1.2 Keep a History

 

8.1.3 Chatbot Interaction Features

 

8.1.4 Token Counting

 
 
 
 

8.1.5 RAG Applied

 

8.2 Edge Applications

 
 
 
 

8.3 LLM Agents

 
 
 

8.4 Summary

 
 
 