Run AI Locally
6 of 16 steps (38%)

Set Up Open WebUI with Ollama: Your Own ChatGPT

Open WebUI is a self-hosted ChatGPT alternative. Pair it with Ollama for a fully local, private AI chat.

Prerequisites

  • Ollama installed and running (run ollama run llama3.2 at least once to pull a model)
  • Docker (or the pip package)
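Before installing Open WebUI, it's worth confirming Ollama is actually reachable. A quick check, assuming Ollama's default address of localhost:11434:

```shell
# List the models Ollama has downloaded, via its HTTP API.
# Assumes the default Ollama address (localhost:11434).
curl http://localhost:11434/api/tags

# The CLI equivalent:
ollama list
```

If the curl call returns a JSON list that includes llama3.2, you're ready to continue.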

Docker install (recommended)

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

Open http://localhost:3000 in your browser and create an account. The first account created becomes the admin.
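If you'd rather not run Docker, Open WebUI is also distributed as a Python package. A minimal sketch, assuming a recent Python (the project has historically targeted Python 3.11, so check the current requirements):

```shell
# Alternative to the Docker install above.
pip install open-webui
# Starts the server on port 8080 by default.
open-webui serve
```

With this route, open http://localhost:8080 instead of port 3000.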

Connect Ollama

In Open WebUI settings, add Ollama as a connection. URL: http://host.docker.internal:11434 (or http://localhost:11434 if not using Docker). Save. You'll see your local models in the model selector.
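A common snag: Ollama binds to 127.0.0.1 by default, so a Dockerized Open WebUI may not reach it through host.docker.internal. One fix, assuming you launch Ollama manually rather than as a system service, is to bind it to all interfaces:

```shell
# Make Ollama listen on all interfaces so the Docker container can reach it.
# (If Ollama runs as a systemd service, set this variable in the service
# environment instead.)
OLLAMA_HOST=0.0.0.0 ollama serve
```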

Start chatting

Select a model (e.g., Llama 3.2) and type your prompt. Everything runs locally; no data leaves your machine.
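The chat UI is a thin front end over Ollama's local HTTP API, so you can confirm everything stays on your machine by hitting the same endpoint directly:

```shell
# Send a prompt straight to the local Ollama API (no network egress).
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```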

Optional: Add OpenAI for hybrid use

You can also add your OpenAI API key. Use local models for sensitive tasks and cloud models for heavy lifting; Open WebUI lets you switch per conversation.
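The API key can be entered in the UI settings, but it can also be passed when starting the container. A sketch of the earlier docker run with the extra environment flag (the key shown is a placeholder):

```shell
# Same container as before, with an OpenAI key supplied via environment.
# Replace the placeholder value with your real key.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -e OPENAI_API_KEY=sk-placeholder \
  --name open-webui ghcr.io/open-webui/open-webui:main
```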

In the next step, you will explore the best self-hosted AI chat interfaces. Browse the options, pick one that fits your workflow, and try it before continuing.

