11 Running on Your Laptop
This chapter covers:
- Why you might want a personal, SLM-based local assistant.
- How to run SLMs locally, including fully offline, with the Ollama serving engine.
- How to run SLMs locally, including fully offline, with the LM Studio desktop application.
- How to run SLMs locally, including fully offline, with the Jan local assistant application.
Chapter 10 presented several frameworks for serving private, domain-specific SLMs behind endpoints, deployed at scale and consumed by client applications. It also covered options for deploying SLMs on edge devices such as Android phones, which typically have limited computational resources compared to a backend deployment in a privately hosted or cloud-hosted cluster. This chapter explores ways to run personal SLM-based assistants and productivity tools directly on a laptop, even completely offline.
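As a first taste of the tools covered in this chapter, the sketch below builds the kind of JSON request that a locally running Ollama server accepts on its default HTTP endpoint (`http://localhost:11434/api/generate`). The model name `phi3` is an assumption for illustration: substitute any model you have pulled locally.

```python
import json

# Ollama listens on port 11434 by default; /api/generate is its
# text-completion endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single complete response instead of
    a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

# The model name "phi3" is an assumption; use any locally pulled model.
body = json.dumps(build_generate_request("phi3", "Explain SLMs in one sentence."))

# Send `body` to OLLAMA_URL with any HTTP client, for example:
#   curl http://localhost:11434/api/generate -d "$body"
```

Because the request is plain JSON over a local socket, any HTTP client (curl, `requests`, a browser extension) can drive the same endpoint, which is what makes Ollama easy to embed in personal productivity tools.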