# Ollama
A free and open-source engine that runs [[Large Language Models (LLMs)]] locally.
## Installation
Download from the official website: https://ollama.com/download. For example, on Linux, the following command is all it takes to get up and running: `curl -fsSL https://ollama.com/install.sh | sh`
Once installed, Ollama runs a server on `http://localhost:11434` and serves any model you have installed over that API, making it a breeze for compatible tools (e.g., the [[Companion plugin for Obsidian]]) to interact with them.
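To check that the server is up, you can hit the API directly. A minimal sketch (the model name is only an example; use whatever you have pulled locally):

```sh
# List the models currently installed (served by the local Ollama API)
curl http://localhost:11434/api/tags

# Ask a model for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:9b",
  "prompt": "Explain what Ollama does in one sentence.",
  "stream": false
}'
```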
TIP: If you want to install/use Ollama from WSL on Windows, you'll need to enable systemd by adding `systemd=true` under the `[boot]` section of `/etc/wsl.conf`, as explained here: https://learn.microsoft.com/en-us/windows/wsl/systemd#how-to-enable-systemd
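A minimal sketch of that change, run from inside the WSL distribution (this assumes `/etc/wsl.conf` doesn't already contain a `[boot]` section; if it does, edit the file by hand instead):

```sh
# Append a [boot] section enabling systemd to /etc/wsl.conf
sudo tee -a /etc/wsl.conf > /dev/null <<'EOF'
[boot]
systemd=true
EOF
```

Then restart WSL from Windows (`wsl --shutdown`) so the change takes effect before installing Ollama.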
## Installing models
Once Ollama is installed on your machine, you can run the following command to install a model: `ollama pull <model name>`.
For example: `ollama pull gemma2:9b`
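Once a model is pulled, you can check what is installed and chat with it from the terminal. For example:

```sh
# List locally installed models
ollama list

# Start an interactive chat session with the pulled model
ollama run gemma2:9b
```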
## References
- Official Website: https://ollama.com/
- List of "supported" AI Models: https://ollama.com/search
- Blog: https://ollama.com/blog
- Source code: https://github.com/ollama/ollama
- Discord community: https://discord.com/invite/ollama
## Incoming links
<!-- QueryToSerialize: LIST FROM [[Ollama]] WHERE public_note = true SORT file.name ASC -->
<!-- SerializedQuery: LIST FROM [[Ollama]] WHERE public_note = true SORT file.name ASC -->
- [[33.02 Content]]
- [[AI (MoC)]]
- [[Companion plugin for Obsidian]]
- [[Large Language Models (LLMs)]]
- [[Tools (MoC)]]
- [[Windows Subsystem for Linux (WSL)]]
<!-- SerializedQuery END -->