3 Submitting prompts for generation

 

This chapter covers

  • Defining prompt templates
  • Providing context
  • Formatting response output
  • Streaming responses
  • Accessing response metadata

In the first chapter, you created a very simple Spring AI application that receives a question in a POST request and submits it directly to an LLM via the injected ChatClient. It worked well, but as your generative AI requirements get more advanced, so will the prompts you send to the LLMs. As your prompts get more sophisticated, a plain String-based prompt may no longer suffice.

Also, there’s more to a generated response than just a basic String. The response may carry useful metadata, including usage data to help you gauge how much each generation impacts billing. Responses can also stream back to the client a piece at a time rather than all at once.

In this chapter you’re going to take your prompt and response handling to the next level. Let’s start by looking at how to define prompt templates.

3.1 Working with prompt templates

Spring AI offers the ability to create prompts from templates. A template consists of static text with one or more placeholders embedded in it. As illustrated in figure 3.1, the placeholders are filled with model data that varies from invocation to invocation, while the surrounding prompt text guides the LLM in how it should respond. The fully rendered result is the prompt sent to the LLM.
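To make the idea concrete, here is a minimal stand-alone sketch of the placeholder-filling step a prompt template performs. (In a real application, Spring AI's PromptTemplate class does this substitution for you; this illustrative version, with a hypothetical render method and example placeholder names, just shows the mechanics.)

```java
import java.util.Map;

public class PromptTemplateSketch {

    // Replace each {key} placeholder in the static template text with
    // the corresponding value from the model data map.
    static String render(String template, Map<String, String> model) {
        String result = template;
        for (Map.Entry<String, String> entry : model.entrySet()) {
            result = result.replace("{" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        // Static prompt text with two placeholders
        String template = "Tell me about {topic} in the style of {author}.";

        // Model data that varies from invocation to invocation
        String prompt = render(template,
                Map.of("topic", "LLMs", "author", "Dr. Seuss"));

        System.out.println(prompt);
        // Tell me about LLMs in the style of Dr. Seuss.
    }
}
```

Each invocation supplies different model data, so the same template can produce many distinct prompts, which is exactly the pattern the rest of this section builds on.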

3.1.1 Defining a prompt template

 
 

3.1.2 Importing the template as a Resource

 

3.2 Stuffing the prompt with context

 

3.3 Assigning prompt roles

 

3.4 Influencing response generation

 
 

3.4.1 Formatting response output

 
 

3.4.2 Streaming the response

 

3.5 Working with response metadata

 
 

3.6 Summary

 
 