A guide to installing Ollama and connecting it to Open WebUI on Linux, Windows, and macOS.
```mermaid
graph LR
A[User / Browser] -->|http://localhost:8080| B[Open WebUI]
B -->|http://localhost:11434| C[Ollama Server]
C --> D[LLM Models]
C --> E[GPU / CPU]
```

```mermaid
flowchart TD
Start([Choose Platform]) --> Linux
Start --> Windows
Start --> macOS
Linux --> L1[curl install script]
L1 --> L2[systemctl enable ollama]
L2 --> Pull[ollama pull model]
Windows --> W1[Download installer / winget]
W1 --> W2[Runs as tray app]
W2 --> Pull
macOS --> M1[Download .dmg / brew install]
M1 --> M2[Runs as menu bar app]
M2 --> Pull
Pull --> Verify[ollama list]
Verify --> WebUI[Install Open WebUI]
WebUI --> Done([Access localhost:8080])
```

Linux: install with the official script:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Enable and start the service:
```bash
sudo systemctl enable ollama
sudo systemctl start ollama
```

GPU support: NVIDIA (CUDA) and AMD (ROCm) are detected automatically; check the drivers with `nvidia-smi` or `rocminfo`.

Environment variables — edit the systemd override:

```bash
sudo systemctl edit ollama
```

Add under `[Service]`:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```

Then reload and restart:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

On Windows, Option A — download the installer from ollama.com/download.
Option B — Use winget:
```powershell
winget install Ollama.Ollama
```

Ollama runs as a system tray application. GPU support for NVIDIA (CUDA) and AMD (ROCm) is auto-detected.
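To confirm whether a loaded model is actually using the GPU, `ollama ps` reports the processor each running model is on. A minimal check (the model name is just an example; the commands are the same in PowerShell):

```bash
# Load a model briefly, then see where it is running
ollama run llama3.1 "hi"
ollama ps
# The PROCESSOR column shows e.g. "100% GPU" when CUDA/ROCm is in use,
# or "100% CPU" if Ollama fell back to CPU inference.
```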
Setting environment variables:
```powershell
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0", "User")
[Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")
```

After setting variables, fully quit Ollama from the system tray and relaunch.
Verify it's listening correctly:
```powershell
netstat -an | findstr 11434
# Should show: 0.0.0.0:11434
```

On macOS, Option A — download the .dmg from ollama.com/download.
Option B — Use Homebrew:
```bash
brew install ollama
```

Ollama runs as a menu bar application. Apple Silicon (M1–M4) is natively supported.
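If you used the Homebrew formula rather than the .dmg, there is no menu bar app; one option (a sketch relying on the formula's bundled service definition) is to run the server in the background with `brew services`:

```bash
# Run the Ollama server as a background service managed by Homebrew
brew services start ollama

# Confirm it is up and answering on the default port
brew services list
curl http://localhost:11434/api/version
```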
Setting environment variables:
```bash
launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"
```

Restart the Ollama app after setting variables.
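As a quick sanity check before relaunching, you can read the values back with `launchctl getenv`. Note that `launchctl setenv` applies to the current login session only and does not persist across reboots:

```bash
# Read back the session-wide variables set above
launchctl getenv OLLAMA_HOST     # expect: 0.0.0.0
launchctl getenv OLLAMA_ORIGINS  # expect: *
```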
Pull a model:

```bash
ollama pull llama3.1
```

Verify:

```bash
ollama list
```

Test:

```bash
ollama run llama3.1 "Hello, world"
```

Other popular models: gemma3, mistral, qwen3, deepseek-r1, phi4.
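Open WebUI talks to the same HTTP API behind these commands, so it can help to hit it directly once. A minimal non-streaming request to the chat endpoint (the same `POST /api/chat` shown in the sequence diagram further down), assuming `llama3.1` has been pulled:

```bash
# One-shot request to Ollama's chat API; the answer is in .message.content
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{ "role": "user", "content": "Say hello in one sentence." }],
  "stream": false
}'
```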
Install Open WebUI with pip (requires Python 3.11+):
```bash
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate   # Linux/macOS
# .\open-webui-venv\Scripts\activate    # Windows PowerShell
pip install open-webui
open-webui serve
```

Alternatively, run it with uvx (install uv first):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve
```

Or run it with Docker:

```bash
docker run -d -p 3000:8080 \
--add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```

Open http://localhost:8080 (pip/uvx) or http://localhost:3000 (Docker).
The first user to register becomes the admin.
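For the Docker install, `--add-host=host.docker.internal:host-gateway` is what lets the container reach Ollama on the host. If the connection is not detected automatically, you can also pass the base URL explicitly; this is the same `docker run` as above with one extra `-e` flag:

```bash
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```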
Next, connect Open WebUI to Ollama:

```mermaid
flowchart TD
A{Same machine?} -->|Yes| B[Default: http://localhost:11434]
A -->|No| C{Network setup}
C --> D[WSL2 → Windows Host]
C --> E[Remote server]
D --> F["Get host IP:\nip route show | grep default | awk '{print $3}'"]
F --> G[Set OLLAMA_HOST=0.0.0.0 on Windows]
G --> H[Allow firewall port 11434]
H --> I["Use http://HOST_IP:11434"]
E --> J[Set OLLAMA_HOST=0.0.0.0]
J --> K[Open port 11434]
K --> I
B --> Done([Connected])
I --> Done
```

Same machine: Open WebUI auto-connects to http://localhost:11434. No config needed.
WSL2 → Windows host: set `OLLAMA_HOST=0.0.0.0` on Windows and restart Ollama. From inside WSL2, get the Windows host IP:

```bash
ip route show | grep default | awk '{print $3}'
```

Allow the port through the Windows firewall:

```powershell
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -LocalPort 11434 -Protocol TCP -Action Allow
```

Then use http://<HOST_IP>:11434 as the Ollama URL. Or set the environment variable before launching:
```bash
export OLLAMA_BASE_URL=http://<HOST_IP>:11434
open-webui serve
```

Remote server: same as above — ensure OLLAMA_HOST=0.0.0.0 is set on the server and the firewall allows port 11434.
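Before blaming Open WebUI, it is worth confirming the remote Ollama is reachable from the machine that runs Open WebUI (`<HOST_IP>` is the server's address, as above):

```bash
# Should return a small JSON object with the server's Ollama version
curl http://<HOST_IP>:11434/api/version

# Listing models over the network confirms the API is fully usable
curl http://<HOST_IP>:11434/api/tags
```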
```mermaid
sequenceDiagram
participant User
participant WebUI as Open WebUI
participant Ollama
User->>WebUI: Open http://localhost:8080
WebUI->>Ollama: GET /api/tags
Ollama-->>WebUI: List of models
WebUI-->>User: Model selector populated
User->>WebUI: Send prompt
WebUI->>Ollama: POST /api/chat
Ollama-->>WebUI: Streamed response
WebUI-->>User: Display response
```

If the model list does not appear, check that Ollama's API responds:

```bash
curl http://localhost:11434/api/tags
```

| Problem | Cause | Fix |
|---|---|---|
| `connection refused` on WSL2 | Ollama bound to 127.0.0.1 | Set `OLLAMA_HOST=0.0.0.0`, restart Ollama |
| Env var not taking effect | Ollama not fully restarted | `Get-Process ollama* \| Stop-Process -Force`, relaunch |
| Firewall blocking | Port 11434 not allowed | Add inbound rule for TCP 11434 |
| No models in dropdown | Models not pulled | `ollama pull llama3.1` |
| `externally-managed-environment` | System Python restriction | Use `python3 -m venv` or uvx |
| Open WebUI slow | CPU inference | Install GPU drivers (CUDA/ROCm) |
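For the "slow on CPU" row, Ollama's startup logs normally say whether a GPU was found. On a Linux systemd install one way to check is below; the grep pattern is only a guess at the relevant lines, adjust as needed:

```bash
# Scan the Ollama service logs for GPU / CUDA / ROCm detection messages
journalctl -u ollama --no-pager | grep -iE 'gpu|cuda|rocm' | tail -n 20
```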
```mermaid
graph TD
subgraph Ollama
A[Install] --> B[Pull Model]
B --> C[Serve on 0.0.0.0:11434]
end
subgraph Open WebUI
D[pip install open-webui] --> E[open-webui serve]
E --> F[http://localhost:8080]
end
C -->|API| F
```

| Command | Description |
|---|---|
| `ollama pull <model>` | Download a model |
| `ollama list` | List installed models |
| `ollama run <model>` | Chat in terminal |
| `ollama serve` | Start server manually |
| `ollama rm <model>` | Remove a model |
| `open-webui serve` | Start Open WebUI |