10 Creating a coding copilot project: Integrating an LLM service into VS Code with RAG
This chapter covers
- Deploying a coding model to an API
- Setting up a VectorDB locally and using it for a retrieval-augmented generation system
- Building a VS Code extension to use our LLM service
- Insights and lessons learned from the project
Progress doesn’t come from early risers—progress is made by lazy men looking for easier ways to do things.
If you touch code for your day job, you’ve probably dreamed about having an AI assistant to help you out. In fact, maybe you already do. With tools like GitHub Copilot on the market, we have seen LLMs take autocomplete to the next level. However, not every company is happy with the commercial offerings, and not every enthusiast can afford them. So let’s build our own!
In this chapter, we will build a Visual Studio Code (VS Code) extension that lets us use our LLM directly in the code editor. VS Code is a natural choice because it is a popular open source code editor. Popular might be an understatement: the Stack Overflow 2023 Developer Survey showed it’s the preferred editor for 81% of developers.1 It’s essentially a lightweight sibling of Visual Studio, a full IDE that’s been around since 1997.