6 Getting started with LangChain.js
This chapter covers
- Using LangChain.js core features to build context-aware AI applications
- Utilizing PromptTemplate, few-shot learning, and other strategies for prompt management
- Chaining calls to improve generative AI responses
- Preparing and storing information for efficient document ingestion and retrieval
- Leveraging memory components in LangChain.js to remember conversation history
- Integrating LangChain.js with the Vercel AI SDK
So far in our exploration of using LLMs to generate content, we've focused primarily on single interactions. In these scenarios, we send a prompt along with relevant context history, parse the response, and then display it to the end user. However, what if we want a more flexible approach that accommodates more complex scenarios?
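For contrast, here is what such a single interaction looks like in LangChain.js. This is a minimal sketch, assuming the @langchain/openai package is installed and an OPENAI_API_KEY environment variable is set; the model name is illustrative:

```ts
import { ChatOpenAI } from "@langchain/openai";

// One-shot interaction: send a prompt (plus any context) and display the reply.
const model = new ChatOpenAI({ model: "gpt-4o-mini" }); // illustrative model name

const response = await model.invoke([
  ["system", "You are a helpful assistant."],
  ["human", "Explain what an LLM is in one sentence."],
]);

console.log(response.content);
```

The model replies once, we read `response.content`, and the exchange is over. Everything beyond that single round trip is up to us.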
Imagine needing to engage in multiple interactions with LLMs before arriving at a final output. Perhaps we want to utilize various tools and combine their responses into a structured format. This need arises because LLM responses often come back in the wrong format or lack the accuracy required for real-life applications.
In many cases, we find ourselves needing to implement quality control steps that involve re-engaging the LLM to refine or validate the information it provided. This process of "chaining" calls, in which multiple interactions with LLMs and other tools are orchestrated, is essential for creating robust and reliable AI-driven solutions.
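The following sketch shows one way to express such a chain in LangChain.js: a first call drafts an answer, and a second quality-control call re-engages the model to refine it. It assumes the @langchain/core and @langchain/openai packages; the model name and prompt wording are illustrative:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const parser = new StringOutputParser();

// Step 1: generate a first draft answer.
const draftPrompt = ChatPromptTemplate.fromTemplate(
  "Answer the user's question in two or three sentences.\nQuestion: {question}"
);

// Step 2: a quality-control pass that refines and validates the draft.
const refinePrompt = ChatPromptTemplate.fromTemplate(
  "Review the draft answer below. Fix inaccuracies and tighten the wording.\n" +
    "Question: {question}\nDraft: {draft}"
);

const draftChain = draftPrompt.pipe(model).pipe(parser);
const refineChain = refinePrompt.pipe(model).pipe(parser);

// Orchestrate the two calls: the first response feeds the second prompt.
async function answer(question: string): Promise<string> {
  const draft = await draftChain.invoke({ question });
  return refineChain.invoke({ question, draft });
}

answer("What does chaining mean in LangChain.js?").then(console.log);
```

Each `pipe` step here is itself a small chain (prompt into model into parser), and the `answer` function orchestrates the two so the output of the first becomes input to the second. The rest of this chapter builds on exactly this pattern.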