
Ollama + Open WebUI Setup Guide

May 3, 2026 · 5 min read · Updated May 3, 2026

A guide to installing Ollama and connecting it to Open WebUI on Linux, Windows, and macOS.


Architecture Overview

graph LR
    A[User / Browser] -->|http://localhost:8080| B[Open WebUI]
    B -->|http://localhost:11434| C[Ollama Server]
    C --> D[LLM Models]
    C --> E[GPU / CPU]
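
The flow is two HTTP hops: the browser talks to Open WebUI on port 8080, and Open WebUI forwards requests to the Ollama API on port 11434. Once everything below is installed, a quick sanity check of both layers (assuming the default ports used throughout this guide):

curl http://localhost:11434    # Ollama's root endpoint replies: Ollama is running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080    # Open WebUI should return 200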

Platform Installation Flow

flowchart TD
    Start([Choose Platform]) --> Linux
    Start --> Windows
    Start --> macOS

    Linux --> L1[curl install script]
    L1 --> L2[systemctl enable ollama]
    L2 --> Pull[ollama pull model]

    Windows --> W1[Download installer / winget]
    W1 --> W2[Runs as tray app]
    W2 --> Pull

    macOS --> M1[Download .dmg / brew install]
    M1 --> M2[Runs as menu bar app]
    M2 --> Pull

    Pull --> Verify[ollama list]
    Verify --> WebUI[Install Open WebUI]
    WebUI --> Done([Access localhost:8080])

1. Installing Ollama

Linux

curl -fsSL https://ollama.com/install.sh | sh

Enable and start the service:

sudo systemctl enable ollama
sudo systemctl start ollama
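
To confirm the service came up before moving on:

systemctl status ollama --no-pager    # should report active (running)
curl http://localhost:11434           # the root endpoint replies: Ollama is running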

GPU support: NVIDIA (CUDA) and AMD (ROCm) GPUs are detected automatically, provided the vendor drivers are installed.

Environment variables — Edit the systemd override:

sudo systemctl edit ollama

Add under [Service]:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart ollama
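
To check that the override took effect, verify Ollama is now bound to all interfaces instead of only 127.0.0.1:

ss -tln | grep 11434
# Should show: 0.0.0.0:11434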

Windows

Option A — Download the installer from ollama.com/download.

Option B — Use winget:

winget install Ollama.Ollama

Ollama runs as a system tray application. GPU support for NVIDIA (CUDA) and AMD (ROCm) is auto-detected.

Setting environment variables:

[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0", "User")
[Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")

After setting variables, fully quit Ollama from the system tray and relaunch.

Verify it's listening correctly:

netstat -an | findstr 11434
# Should show: 0.0.0.0:11434

macOS

Option A — Download the .dmg from ollama.com/download.

Option B — Use Homebrew:

brew install ollama

Ollama runs as a menu bar application. Apple Silicon (M1–M4) is natively supported.

Setting environment variables:

launchctl setenv OLLAMA_HOST "0.0.0.0"
launchctl setenv OLLAMA_ORIGINS "*"

Restart the Ollama app after setting variables.
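
To confirm the variables are visible to newly launched apps:

launchctl getenv OLLAMA_HOST    # should print: 0.0.0.0
curl http://localhost:11434     # the root endpoint replies: Ollama is running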


2. Pulling Your First Model

ollama pull llama3.1

Verify:

ollama list

Test:

ollama run llama3.1 "Hello, world"

Other popular models: gemma3, mistral, qwen3, deepseek-r1, phi4.
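
Open WebUI drives the same HTTP API you can call by hand. A minimal non-streaming request against the model pulled above:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Hello, world",
  "stream": false
}'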


3. Installing Open WebUI

With pip (all platforms)

Requires Python 3.11+.

python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate    # Linux/macOS
# .\open-webui-venv\Scripts\activate     # Windows PowerShell
pip install open-webui
open-webui serve
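
By default open-webui serve binds to 0.0.0.0:8080. If that port is taken, the bind address and port can be overridden; the flag names below match the Open WebUI CLI as of writing, so confirm with open-webui serve --help:

open-webui serve --host 127.0.0.1 --port 9090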

With uvx (recommended)

curl -LsSf https://astral.sh/uv/install.sh | sh
DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve

With Docker (alternative)

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
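
If Ollama runs on the Docker host (as in this guide), the container can also be pointed at it explicitly with Open WebUI's OLLAMA_BASE_URL variable, reusing the host-gateway alias added above:

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main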

Open http://localhost:8080 (pip/uvx) or http://localhost:3000 (Docker).

The first user to register becomes the admin.


4. Connecting Open WebUI to Ollama

flowchart TD
    A{Same machine?} -->|Yes| B[Default: http://localhost:11434]
    A -->|No| C{Network setup}
    C --> D[WSL2 → Windows Host]
    C --> E[Remote server]

    D --> F["Get host IP:\nip route show | grep default | awk '{print $3}'"]
    F --> G[Set OLLAMA_HOST=0.0.0.0 on Windows]
    G --> H[Allow firewall port 11434]
    H --> I["Use http://HOST_IP:11434"]

    E --> J[Set OLLAMA_HOST=0.0.0.0]
    J --> K[Open port 11434]
    K --> I

    B --> Done([Connected])
    I --> Done

Same machine (default)

Open WebUI auto-connects to http://localhost:11434. No config needed.

WSL2 → Windows host

  1. On Windows, set OLLAMA_HOST=0.0.0.0 and restart Ollama.
  2. In WSL2, find the Windows host IP:
ip route show | grep default | awk '{print $3}'
  3. Allow through Windows Firewall (admin PowerShell):
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -LocalPort 11434 -Protocol TCP -Action Allow
  4. In Open WebUI: Settings → Connections → Ollama Base URL → http://<HOST_IP>:11434

Or set the environment variable before launching:

export OLLAMA_BASE_URL=http://<HOST_IP>:11434
open-webui serve
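
Before launching, it's worth confirming the Windows host actually answers from inside WSL2; this combines the host-IP lookup with an API call:

HOST_IP=$(ip route show | grep default | awk '{print $3}')
curl http://$HOST_IP:11434/api/tags    # should return your models as JSON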

Remote server

Same as above — ensure OLLAMA_HOST=0.0.0.0 is set on the server and the firewall allows port 11434.
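
If you'd rather not expose port 11434 to the network at all, an SSH tunnel forwards the remote API to your machine so Open WebUI can keep its default localhost URL (replace user@server with your own host):

ssh -N -L 11434:localhost:11434 user@server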


5. Verifying the Connection

sequenceDiagram
    participant User
    participant WebUI as Open WebUI
    participant Ollama

    User->>WebUI: Open http://localhost:8080
    WebUI->>Ollama: GET /api/tags
    Ollama-->>WebUI: List of models
    WebUI-->>User: Model selector populated
    User->>WebUI: Send prompt
    WebUI->>Ollama: POST /api/chat
    Ollama-->>WebUI: Streamed response
    WebUI-->>User: Display response

  1. Open http://localhost:8080.
  2. Check the model dropdown — your pulled models should appear.
  3. If empty, verify Settings → Connections shows a green status for Ollama.
  4. Test from terminal:
curl http://localhost:11434/api/tags
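
The response is JSON with a models array. With jq installed, you can list just the installed model names:

curl -s http://localhost:11434/api/tags | jq '.models[].name'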

Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| connection refused on WSL2 | Ollama bound to 127.0.0.1 | Set OLLAMA_HOST=0.0.0.0, restart Ollama |
| Env var not taking effect | Ollama not fully restarted | Get-Process ollama* \| Stop-Process -Force, relaunch |
| Firewall blocking | Port 11434 not allowed | Add inbound rule for TCP 11434 |
| No models in dropdown | Models not pulled | ollama pull llama3.1 |
| externally-managed-environment | System Python restriction | Use python3 -m venv or uvx |
| Open WebUI slow | CPU inference | Install GPU drivers (CUDA/ROCm) |
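
Most of the failures above can be told apart with two quick checks, sketched here for Linux/WSL2 (swap localhost for your host IP in the remote cases):

ss -tln | grep 11434 || echo "nothing listening on 11434"
curl -s http://localhost:11434/api/tags || echo "Ollama API unreachable"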

Quick Reference

graph TD
    subgraph Ollama
        A[Install] --> B[Pull Model]
        B --> C[Serve on 0.0.0.0:11434]
    end

    subgraph Open WebUI
        D[pip install open-webui] --> E[open-webui serve]
        E --> F[http://localhost:8080]
    end

    C -->|API| F

| Command | Description |
| --- | --- |
| ollama pull <model> | Download a model |
| ollama list | List installed models |
| ollama run <model> | Chat in terminal |
| ollama serve | Start the server manually |
| ollama rm <model> | Remove a model |
| open-webui serve | Start Open WebUI |