3 OpenAI’s GPT Models: An Overview

 

This chapter covers

  • An overview of OpenAI’s GPT models
  • The basics of model parameters
  • Using the OpenAI API and OpenAI Playground
  • Interacting with the API with basic R
  • Leveraging OpenAI LLM features and extensions, such as function calling and embeddings

OpenAI’s Generative Pre-trained Transformer (GPT) models are at the forefront of generative AI, enabling advanced natural language processing and code generation. Built on the transformer architecture and trained on extensive datasets, they draw on a vast knowledge base that spans writing (R) code, data analytics, and the essentials of mathematics and statistics.

This chapter sets the stage for leveraging these GPT models throughout the book, focusing on two main objectives: understanding the models and the parameters that control their behavior, and interacting with them from R through the OpenAI API and Playground.
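To give a first taste of the second objective, the sketch below shows roughly what a single chat completions request can look like from R. It is a minimal illustration rather than the chapter's final code: it assumes the httr package is installed and that your OpenAI API key is stored in the OPENAI_API_KEY environment variable, and the model name, temperature, and prompt are placeholders you can swap freely. Section 3.4 walks through the endpoint, its parameters, and the structure of the response object in detail.

# Minimal sketch: one chat completions request from R
# (assumes httr is installed and an API key is set in OPENAI_API_KEY)
library(httr)

response <- POST(
  url = "https://api.openai.com/v1/chat/completions",
  add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
  encode = "json",                          # httr serializes the body list to JSON
  body = list(
    model = "gpt-3.5-turbo",                # one of the models covered in section 3.1
    temperature = 0.2,                      # a sampling parameter covered in section 3.3
    messages = list(
      list(role = "system",
           content = "You are a helpful R coding assistant."),
      list(role = "user",
           content = "Write one line of R that computes the mean of mtcars$mpg.")
    )
  )
)

# The assistant's reply is nested inside the response object (see section 3.4.5)
content(response)$choices[[1]]$message$content

Everything in this call, from the choice of model to the sampling parameters and the shape of the response, is unpacked step by step in the sections that follow.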

3.1 The GPT Family

3.1.1 GPT-4 and GPT-4 Turbo

3.1.2 GPT-3.5 (Turbo)

3.1.3 GPT base

3.2 Detailed Model Information

3.3 GPT Model Parameters

3.3.1 Maximum Length

3.3.2 Stop Sequences

3.3.3 Frequency Penalty

3.3.4 Presence Penalty

3.3.5 Temperature

3.3.6 Top-p (Nucleus Sampling)

3.3.7 Temperature vs. Top-p (in the coding context)

3.4 Playground and API

3.4.1 Playground Tour

3.4.2 Testing Prompts and Parameters

3.4.3 Interacting with the OpenAI API

3.4.4 Parameters for API Calls to the Chat Endpoint

3.4.5 Structure of the Response Object

3.5 Function Calling: Advanced Interactions with OpenAI’s Models

3.5.1 Practical Applications of Function Calling

3.5.2 The Process of Function Calling

3.5.3 OpenAI Function Calling with R

3.6 Embeddings

3.6.1 Introducing Embeddings Through a Simple Example

3.6.2 Understanding Embeddings and Vector Databases

3.6.3 Embeddings in OpenAI’s API

3.7 Summary

3.8 References
