1 Introduction to Prompt Engineering
This chapter covers
- Prompt engineering basics and why it matters
- Large language models (LLMs) and how they work
- Single- and multi-modal prompting
- How prompt engineering varies across LLMs
- Languages and tools well suited to prompt engineering
Prompt engineering is the process of crafting effective prompts for artificial intelligence (AI) models to obtain desired results. In many ways, prompt engineering is like building software with data structures and algorithms: the better suited the data structures and algorithms are to a given problem, the better the software can use the underlying compute infrastructure and the better it performs. Along the same lines, a well-engineered prompt helps unlock the true potential of any large language model (LLM).
Prompt engineering involves clearly stating the question or problem, structuring the prompt by defining the relevant context, and then using an LLM to test the prompt against its expected response. The choice of LLM affects both the output generated and the inputs needed to interact with it. This matters because every LLM comes with its own tradeoffs: some are best suited to academic work, others to general-purpose tasks, and still others to specialized domains such as mathematics or cybersecurity.
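As a concrete illustration of this workflow, the minimal sketch below structures a prompt with explicit context, states the question, and sends it to an LLM so the response can be tested. It assumes the OpenAI Python client with an API key set in the environment; the model name and the expected-answer check are illustrative, and the same pattern carries over to other providers.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Define the context the model should work within.
context = (
    "You are a customer-support assistant for an online bookstore. "
    "Answer only questions about orders, shipping, and returns."
)

# 2. State the question or problem clearly.
question = "My order arrived damaged. How do I request a replacement?"

# 3. Send the structured prompt to the LLM.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ],
)
answer = response.choices[0].message.content
print(answer)

# 4. Test the prompt: check the response against what we expect to see.
assert "replacement" in answer.lower() or "return" in answer.lower()
```

Separating the context (the system message) from the question (the user message) makes it easy to refine each part independently and to rerun the same test against different models when comparing their tradeoffs.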