3 The OpenAI Python library


This chapter covers

  • Installing the OpenAI library
  • Invoking GPT models using Python
  • Configuration parameters

In the last chapter, we used GPT models via the OpenAI web interface. This works well as long as we’re just trying to have a conversation or classify and summarize single reviews. However, imagine trying to classify hundreds of reviews. In that case, using the web interface manually for each review becomes very tedious (to say the least). Also, perhaps we want to use a language model in combination with other tools. For instance, we might want to use GPT models to translate questions to formal queries and then seamlessly execute those queries in the corresponding tool (without having to manually copy queries back and forth between different interfaces). In all these scenarios, we need a different interface.

In this chapter, we’ll discuss a Python library from OpenAI that lets you call OpenAI’s language models directly from Python. This lets you integrate calls to language models into your own code, just like calls to any other function. We will be using this library in most chapters of the book, so it makes sense to at least skim this chapter before proceeding to the following ones.
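As a preview, the following sketch shows what such a call looks like with the v1.x interface of the library (covered in detail later in this chapter). The model name and prompt are illustrative, and the call assumes an API key is available in the `OPENAI_API_KEY` environment variable; the request is skipped if no key is set.

```python
import os

# An illustrative prompt: the review-classification task from the last chapter.
messages = [
    {"role": "system", "content": "You classify the sentiment of product reviews."},
    {"role": "user", "content": "Review: 'Great product, fast shipping!'"},
]

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=messages,
    )
    # The model's reply is nested inside the response object.
    print(response.choices[0].message.content)
```

We'll unpack each piece of this call, including the `messages` structure and the response object, over the course of this chapter.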

Although this chapter focuses on OpenAI’s Python library, the libraries offered by other providers of language models (including Anthropic, Cohere, and Google) are similar.

3.1 Prerequisites

3.2 Installing OpenAI’s Python library

3.3 Listing available models

3.4 Chat completion

3.5 Customizing model behavior

3.5.1 Configuring termination conditions

3.5.2 Configuring output generation

3.5.3 Configuring randomization

3.5.4 Customization example

3.5.5 Further parameters

Summary