Easily Install Ollama + OpenWebUI + PostgreSQL (pgvector)


The Complete Guide to Your Local AI Stack [Docker Edition]

Ready to install Ollama with OpenWebUI and deploy a powerful self-hosted AI assistant? You’re in the right place. This Docker AI stack setup includes everything you need to run LLM locally with pgvector—fast, private, and fully under your control.

Everything has changed. Artificial intelligence no longer requires massive server farms, expensive APIs, or security compromises. Now you can own a full-scale AI infrastructure that runs locally, under your control, on your own terms.

Welcome to a new era—where anyone can become the architect of their own digital mind. You’re not just getting tools, you’re building an ecosystem: the powerful Ollama LLM engine, the elegant OpenWebUI interface, and the robust PostgreSQL + pgvector vector database.

Why rely on the cloud when you can build your own? With this stack, you can run advanced language models (LLaMA3, Mistral, Codellama), integrate documents, configure private RAG pipelines, and deploy intelligent agents—all within an autonomous system built to scale.

This guide is your ticket to the world of modern local AI. We’ll walk you through each step to assemble this stack with Docker, deploy it in 10 minutes, and start using it right away. Your intelligent system starts here.


Why This Stack?

When it comes to AI, every detail matters: performance, privacy, scalability, and control. This stack was chosen for its perfect balance between flexibility, simplicity, and power.

  • 🧠 Ollama — The engine of your AI system. It runs LLM models directly on your machine, with no external API calls. You control the model, the data, and the logic.
  • 💬 OpenWebUI — A user-friendly web interface that lets you interact with the model in chat format, upload documents, manage sessions, and inject context. With built-in authentication, you can limit access and ensure privacy.
  • 🧩 PostgreSQL + pgvector — A vector database that enables semantic search and RAG (Retrieval-Augmented Generation). Upload PDFs, search by meaning, and get context-aware responses.
  • 🐳 Docker — A unified deployment environment where everything runs isolated and reproducibly. One configuration file, and you’re ready to go on any server or local machine.
  • 🔐 Privacy & Local Control — All data stays with you. No clouds, no leaks, no third parties. Your system. Your intelligence. Your rules.

Preparation

Before deploying your AI stack, make sure your system is ready for containerized applications. Follow these steps to prepare the environment and install everything you’ll need for Ollama, OpenWebUI, and PostgreSQL.

  1. Update your system and install dependencies: Ensure you have the latest versions of Docker and Docker Compose:

sudo apt update && sudo apt install -y docker.io docker-compose curl

  2. Enable and start Docker: This ensures Docker launches automatically with your system:

sudo systemctl enable docker
sudo systemctl start docker

  3. Verify the installation: Make sure everything is installed correctly:

docker --version
docker-compose --version
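
Optionally, you can add your user to the docker group so you don’t need sudo for every Docker command (log out and back in for the change to take effect):

sudo usermod -aG docker $USER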

Your system is now ready for the next step—creating the configuration to launch all components using Docker.

Docker Compose

Now let’s create the main configuration file that will launch all three components—Ollama, PostgreSQL + pgvector, and OpenWebUI—inside isolated containers with a single command.

  1. Create the working directory: This is where all configs and data will be stored:

mkdir ollama-webui-stack && cd ollama-webui-stack

  2. Create the Docker Compose config file: This file will define the deployment parameters:

nano docker-compose.yml

  3. Paste the configuration: Add the following content into the file:

version: '3.8'

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

  postgres:
    image: ankane/pgvector
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: openwebui
      POSTGRES_USER: revold
      # example credentials; change these (and the DATABASE_URL below) before deploying
      POSTGRES_PASSWORD: Apollo7337!+
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_AUTH=true
      - DATABASE_URL=postgresql://revold:Apollo7337!+@postgres:5432/openwebui
    volumes:
      - openwebui_data:/app/backend/data
    depends_on:
      - ollama
      - postgres

volumes:
  ollama_data:
  pgdata:
  openwebui_data:
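
Before launching, it’s worth validating the file; docker-compose parses the config and reports any YAML or schema errors without starting anything:

docker-compose config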

Launch

With the configuration ready, it’s time to launch everything. Thanks to Docker Compose, you can start the entire stack with one command.

  1. Start the containers:

docker-compose up -d

  2. Check that everything is running:

docker ps
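
You should see three containers: ollama, postgres, and openwebui (the container_name values from the compose file). If one is missing or restarting, check its logs, for example:

docker-compose logs -f openwebui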

  3. Access the interfaces:
  • OpenWebUI: http://localhost:8080
  • Ollama API: http://localhost:11434
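
You can also smoke-test both services from the terminal. Ollama’s /api/tags endpoint returns the locally installed models (an empty list until you pull one in the next step), and a HEAD request confirms OpenWebUI is serving:

curl http://localhost:11434/api/tags
curl -I http://localhost:8080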

Load a Model

Once the containers are up and running, the next step is to load a language model so you can begin interacting with AI through OpenWebUI.

  1. Open a terminal and run the following command inside the Ollama container:

docker exec -it ollama ollama pull llama3

This will download and install the LLaMA3 model into your local Ollama environment.
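
To confirm the model works before opening the web interface, you can chat with it directly in the terminal:

docker exec -it ollama ollama run llama3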

  2. You can also load other models, depending on your needs (see the pull example after this list):
  • mistral
  • codellama
  • gemma
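
Each is pulled the same way as LLaMA3, for example:

docker exec -it ollama ollama pull mistral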

Models can be selected and switched inside OpenWebUI or via Ollama’s API. All models operate locally—no data leaves your machine.
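
As a quick example of the API route, a single generation request looks like this; the model field selects any model you’ve pulled, and the response streams back as JSON lines:

curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'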

You’re now ready to start using your intelligent assistant.

🧠 Usage

With everything running and models installed, you can now interact with your system. OpenWebUI provides an intuitive interface for working with the model and your knowledge base.

  1. Log in to OpenWebUI: Go to http://localhost:8080, create an account, and log in.
  2. Upload documents: In the “Knowledge” section, you can upload files (PDF, TXT, etc.). They’ll be automatically indexed into PostgreSQL + pgvector for semantic search (see the pgvector sketch after this list).
  3. Chat with context: After uploading documents, you can ask questions. The AI will reference those materials to deliver precise, informed answers.
  4. Manage sessions and models: The interface allows you to switch models, review history, and manage multiple sessions simultaneously.
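
To illustrate what pgvector is doing under the hood, here is a minimal, self-contained sketch you can run against the postgres container. The demo table and 3-dimensional vectors are hypothetical examples for illustration, not OpenWebUI’s actual schema:

docker exec -i postgres psql -U revold -d openwebui <<'SQL'
-- make sure the extension is available (the ankane/pgvector image ships it)
CREATE EXTENSION IF NOT EXISTS vector;
-- toy table with 3-dimensional embeddings (real embeddings have hundreds of dimensions)
CREATE TABLE IF NOT EXISTS demo (id serial PRIMARY KEY, embedding vector(3));
INSERT INTO demo (embedding) VALUES ('[1,2,3]'), ('[2,2,2]');
-- semantic search: nearest neighbor by L2 distance
SELECT id, embedding FROM demo ORDER BY embedding <-> '[1,1,1]' LIMIT 1;
SQL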

Your system is now fully capable of tackling complex tasks like text generation, data analysis, research, and knowledge management.

Extensions

The current stack is powerful and self-contained, but a few integrations can take it even further:

  1. Nginx + SSL (via Cloudflare): Secure your instance with HTTPS and automatic certificate renewals. Great for exposing your system publicly with confidence.
  2. LangChain Agents: Build toolchains, memory, and logical flows with LangChain. Enable multi-step reasoning, web search, and external API access from your AI.
  3. FastAPI + pgvector backend: Create your own custom backend that interacts with the vector database. Ideal for building tailored assistants or connecting external systems.
  4. LangGraph / Multi-model Routing: Route different requests to different models or logic flows. Perfect for hybrid systems combining classification, summarization, and creativity.

These upgrades will help scale your AI, optimize performance, and tailor the stack to your exact needs.

REVOLD AI — Leading the Future of AI Architecture

REVOLD AI isn’t just a company—it’s a driving force behind the next wave of local AI systems. We build tools that empower developers, streamline businesses, and give users full control over their data and intelligence.

Our solutions are already being used in:

  • autonomous AI assistants
  • legal document analysis and automation
  • intelligent knowledge management
  • predictive analytics and scenario modeling

We blend the best of LLMs, vector databases, containerization, and security to deliver scalable architectures for real-world use.

If you’re ready to not just use AI, but build the future with it—you’re already one of us. REVOLD AI is the intelligence infrastructure of tomorrow.