3 Meeting the Kernel of Semantic Kernel


This chapter covers

  • Understanding the role of the kernel in Semantic Kernel
  • Building effective prompts and invoking them with the kernel's invocation methods
  • Implementing streaming for real-time responses
  • Querying large language models directly, without the kernel

In this chapter, we dive deeper into Semantic Kernel and its core components. Think of the kernel as a melting pot that brings together essential elements, such as AI services, enabling your applications to leverage the power of large language models.
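
To make that idea concrete, here is a minimal sketch using the Python SDK of Semantic Kernel; the service id, model name, and API key are placeholders you would replace with your own values:

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# The kernel is the container that holds the AI services and other
# components your application will use.
kernel = sk.Kernel()

# Register a chat completion service with the kernel.
# "chat" is an arbitrary service id; the model name and key are placeholders.
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat",
        ai_model_id="gpt-4o-mini",
        api_key="YOUR_OPENAI_API_KEY",
    )
)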

You'll learn how to build a basic Semantic Kernel application, starting with prompts, the key technique for interacting with AI models. We'll show you how to craft well-designed prompts using execution settings, which help you get the best responses from the AI, and kernel arguments, which make your prompts dynamic and reusable.
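
As a rough sketch of how these pieces fit together in the Python SDK (the settings values, the topic variable, and the prompt text are illustrative, not prescriptive):

import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import KernelArguments

kernel = sk.Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat",
        ai_model_id="gpt-4o-mini",       # placeholder model name
        api_key="YOUR_OPENAI_API_KEY",   # placeholder key
    )
)

async def main():
    # Execution settings shape how the model responds (example values).
    settings = OpenAIChatPromptExecutionSettings(max_tokens=200, temperature=0.7)

    # Kernel arguments fill in the {{$topic}} placeholder at render time,
    # which is what makes the prompt reusable.
    arguments = KernelArguments(topic="Semantic Kernel", settings=settings)

    result = await kernel.invoke_prompt(
        prompt="Write one sentence explaining {{$topic}}.",
        arguments=arguments,
    )
    print(result)

asyncio.run(main())

Changing the value bound to topic reuses the same prompt for a different subject without touching the prompt text itself.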

We'll also cover some interesting advanced features, such as streaming real-time responses and using chat history to give your prompts context. These tools will make your AI applications more responsive and efficient.
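
For a first taste of streaming, the following sketch assumes the invoke_prompt_stream helper available in recent versions of the Python SDK; chat history gets its own treatment later in the chapter:

import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = sk.Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat",
        ai_model_id="gpt-4o-mini",
        api_key="YOUR_OPENAI_API_KEY",
    )
)

async def main():
    # Each chunk is printed as soon as the model produces it, so the user
    # sees the answer build up instead of waiting for the full response.
    async for chunk in kernel.invoke_prompt_stream(
        prompt="List three things the kernel does in Semantic Kernel."
    ):
        print(str(chunk[0]), end="")
    print()

asyncio.run(main())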

Finally, we will introduce you to various prompt template factories. These are like rendering tools that help you create complex prompts for more sophisticated interactions with AI models.
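
As a hedged preview, the sketch below renders a Handlebars-style template instead of the default format; it assumes the template_format parameter of invoke_prompt and that the optional Handlebars template support is installed in your environment:

import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments

kernel = sk.Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat",
        ai_model_id="gpt-4o-mini",
        api_key="YOUR_OPENAI_API_KEY",
    )
)

async def main():
    # Handlebars templates use {{variable}} syntax rather than {{$variable}}.
    result = await kernel.invoke_prompt(
        prompt="Greet {{name}} in one friendly sentence.",
        template_format="handlebars",
        arguments=KernelArguments(name="Ada"),
    )
    print(result)

asyncio.run(main())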

By the end of this chapter, you'll have a solid understanding of how to use Semantic Kernel to create efficient, dynamic, and context-aware prompts.

3.1 Dissecting the Anatomy of the Kernel

3.1.1 Understanding the Kernel's Core Components

3.1.2 Exploring the Kernel's Workflow

3.2 Initializing and Configuring the Kernel

3.2.1 Adding AI Services to the Kernel

3.2.2 Exploring Other AI Services

3.3 Mastering Prompts

3.3.1 Crafting Effective Prompts

3.3.2 Understanding the Request-Response Mechanism

3.3.3 Querying Prompt Response

3.3.4 Streaming Prompt Response

3.3.5 Exception Management and Error Handling

3.4 Integrating Multiple AI Services

3.4.1 Using AI Services Explicitly

3.4.2 Selecting AI Services Using the AI Service Selector

3.5 Prompting with AI Services Without the Kernel

3.5.1 Generating with OpenAI Services

3.5.2 Streaming Responses

3.6 Summary