appendix A Local AI models: An accessible alternative

Local AI models offer greater control over your data, stronger privacy, and more freedom to customize AI tools for your projects. Unlike cloud-based models, which prioritize ease of use and scalability, local models can be tailored to unique workflows and run without an internet connection—ideal for industries handling confidential data, such as healthcare, finance, and government. By keeping data processing local, these models make it easier to comply with strict data protection regulations and meet industry-specific legal requirements.

Running local AI models does come with challenges, chiefly hardware constraints. Whereas cloud models offload computation to remote servers, local models need dedicated hardware to train and run efficiently. Small language models (SLMs) are a practical solution here: they're designed to operate on limited hardware, allowing organizations to deploy AI locally without high-end infrastructure. In this appendix, we'll guide you step by step through what it takes to implement local models, showing you that it's more straightforward than you may think.

A.1 Getting your hardware ready

A.2 Tools for simplifying local AI setup

A.2.1 VS Code AI Toolkit

A.2.2 Setting up your own local AI chatbot in Visual Studio Code

A.2.3 Ollama

A.2.4 Using structured outputs with local models

A.3 Popular local AI models

A.4 Bringing it all together: Security and ethical considerations