
appendix A Trying out LangChain

 

A.1 Trying out LangChain in a Jupyter Notebook environment

We’ll begin with a straightforward example: completing a sentence using an OpenAI model and refining its output through prompt engineering. OpenAI’s models are a convenient starting point because they are accessible through a public REST API and require no local infrastructure setup.
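To make the idea concrete, here is a minimal sketch of sentence completion through LangChain's OpenAI wrapper. It assumes the `langchain-openai` package is installed and that your key is available in the `OPENAI_API_KEY` environment variable; the model name `gpt-4o-mini` is an illustrative choice, not a requirement.

```python
import os

# The prompt we want the model to complete.
prompt = "Complete this sentence: The quick brown fox"

# Only call the API when a key is available in the environment.
if os.environ.get("OPENAI_API_KEY"):
    from langchain_openai import ChatOpenAI  # requires the langchain-openai package

    # temperature=0 makes the completion more deterministic; the model
    # name below is an assumption for illustration.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    response = llm.invoke(prompt)  # returns an AIMessage
    print(response.content)
else:
    print("Set OPENAI_API_KEY to run this example.")
```

Because the API call is guarded by the environment-variable check, you can paste this cell into a notebook and run it safely before your key is configured.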

If you prefer, you can use an open source LLM inference engine such as Ollama. However, because many of you may not have run an open source LLM locally before, we’ll cover that setup in detail in appendix E. For now, we’ll keep things simple and use the OpenAI REST API.

Before you proceed, make sure you have Python 3.11 or later installed on your local machine and that the following prerequisites are met:

  • You already have, or know how to generate, an OpenAI API key.
  • You know how to set up a Python Jupyter Notebook environment.

If you haven’t met these prerequisites, don’t worry. If you’re unfamiliar with creating an OpenAI API key, I’ll walk you through the steps next; if you need help setting up a Jupyter Notebook environment, see appendix B.

For those of you who need it, let’s quickly create an OpenAI API key. Assuming you’ve registered with OpenAI, which is necessary to explore the ChatGPT examples discussed at the beginning of the chapter, follow these steps:
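Once you have a key, a common convention (assumed here, not mandated by the text) is to expose it through the `OPENAI_API_KEY` environment variable, which OpenAI's client libraries pick up automatically. In a notebook cell you can set it for the current session like this:

```python
import os

# Set the key for this notebook session only. The value below is a
# placeholder -- paste your real key, or load it from a secrets file
# rather than hard-coding it in a shared notebook.
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
```

Setting the variable inside the notebook keeps the key out of your shell history, but remember to clear the cell before committing the notebook anywhere.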

A.1.1 Sentence completion example

A.1.2 Prompt engineering examples

A.1.3 Creating chains and executing them with LCEL