Building An AI Playground With Ollama And Open WebUI: A Hands-On Introduction For Beginners

Large Language Models (LLMs) have been making waves in the field of artificial intelligence (AI) for quite some time, and their popularity continues to soar. These advanced models have the remarkable ability to understand, generate, and respond to human language with unprecedented accuracy and depth. With this surge in interest comes the rise of open source solutions that enable individuals and organizations to host LLMs locally.

In this blog post, we will explore how to turn your existing local computer or server into a simple AI server.

Ollama and Open WebUI

Ollama is an open-source framework for running Large Language Models (LLMs) locally on personal computers or servers. Despite the similar name, it is not a Meta product: it is an independent open-source project that can run Meta's Llama models, among many others. It targets researchers, students, and developers who do not have access to large cloud resources or who want to maintain their data privacy, and it provides a simple interface for managing, exploring, and testing LLMs locally.

Open WebUI is a web interface that works well with Ollama and with OpenAI-compatible APIs. We will install Ollama and Open WebUI using Docker Compose and give a short introduction to the Open WebUI interface.

Prerequisites

Before getting started, ensure your system meets the following requirements:

  1. Docker and Docker Compose installed and configured
  2. The necessary GPU drivers (for GPU mode); this guide is tested only on NVIDIA GPUs
  3. CUDA and the NVIDIA Container Toolkit installed
Note
For AMD GPUs, consult other resources as this guide is specifically tested with NVIDIA GPUs.
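Before proceeding, you can sanity-check the GPU prerequisites from a terminal. The following commands are a sketch, assuming an NVIDIA GPU and a configured NVIDIA Container Toolkit (the CUDA image tag is just an example; any CUDA base image works):

```shell
# Check that Docker and the Compose plugin are available
docker --version
docker compose version

# Check that the NVIDIA driver sees the GPU on the host
nvidia-smi

# Check that containers can access the GPU through the
# NVIDIA Container Toolkit
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the last command prints the same GPU table as the host-side `nvidia-smi`, GPU passthrough is working.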

Installing Ollama and Open WebUI

To install Ollama and Open WebUI on your workstation, create the following docker-compose.yaml file.

services:

  ollama:
    image: ollama/ollama
    restart: always
    networks:
      - app-network
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    # The following section passes a GPU through to the container;
    # remove it if you are running in CPU-only mode.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - "3000:8080"
    networks:
      - app-network
    volumes:
      - open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434

volumes:
  ollama:
    driver: local
  open-webui:
    driver: local

networks:
  app-network:
    driver: bridge

Start the stack with docker compose up -d (or docker-compose up -d if you are using the older standalone binary).

You should now be able to access the Open WebUI interface at localhost:3000. Because restart: always is set, both services will come back up automatically after a reboot.
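You can also verify from the command line that both containers are up. This is a quick sketch; the service names and ports match the compose file above:

```shell
# List the services and their state
docker compose ps

# The Ollama API answers on port 11434; the root endpoint
# returns a short "Ollama is running" message
curl http://localhost:11434/

# Open WebUI answers on port 3000; -I fetches only the headers
curl -I http://localhost:3000
```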

Sign up

When you first access Open WebUI, you will have to sign up. The first account created automatically receives admin access; any further users must be added by the admin.

/building-an-ai-playground-with-ollama-and-open-webui/sign_up.png
Fig 1. Sign Up

Pulling models locally

To get a new model from Ollama.com:

  1. Go to Settings > Models.
  2. Under “Pull a model from Ollama.com”, enter the model as name:version (e.g., mistral:7b) and hit the download button.
  3. The system will update with the downloaded model.
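The same pull can be done without the web interface, either with the Ollama CLI inside the running container or through Ollama's REST API. This is a sketch; the service name and port match the compose file above:

```shell
# Pull a model with the CLI inside the running container
docker compose exec ollama ollama pull mistral:7b

# ...or ask the REST API to pull it
curl http://localhost:11434/api/pull -d '{"name": "mistral:7b"}'

# List the models that are now available locally
docker compose exec ollama ollama list
```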

/building-an-ai-playground-with-ollama-and-open-webui/pulling_models_1.png
Fig 2. Home Screen

/building-an-ai-playground-with-ollama-and-open-webui/pulling_models_2.png
Fig 3. Downloading Model

To learn more about the available models, follow the link shown in the interface, which leads to the model library on Ollama.com.

/building-an-ai-playground-with-ollama-and-open-webui/pulling_models_3.png
Fig 4. Model Resources

Chatting

To start chatting, select a model and start typing in the chat box below. Response speed depends on your system's performance and on whether a GPU is passed through to the Ollama container.
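If you prefer the command line to the web interface, the same model can be queried directly through Ollama's generate endpoint. A minimal sketch, assuming mistral:7b has already been pulled:

```shell
# Ask the model a question; "stream": false returns a single
# JSON object instead of a stream of partial responses
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The reply is a JSON object whose "response" field contains the generated text.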

/building-an-ai-playground-with-ollama-and-open-webui/chatting.png
Fig 5. Start Chatting

Creating custom models

You can create custom models by adding instructions and restrictions to existing models in the form of Modelfiles. To create a custom model, go to Modelfiles > Create a Modelfile.

/building-an-ai-playground-with-ollama-and-open-webui/creating_custom_models_1.png
Fig 6. Navigate to Modelfiles

Creating a Modelfile is similar to writing a Dockerfile: you select an existing model you want to repurpose as the base and add a SYSTEM prompt with your instructions.

/building-an-ai-playground-with-ollama-and-open-webui/creating_custom_models_2.png
Fig 7. Create a Modelfile
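For reference, a minimal Modelfile looks like this (the base model and prompt are just examples):

```
FROM mistral:7b

# Sampling temperature; lower values give more deterministic answers
PARAMETER temperature 0.7

SYSTEM """
You are a concise assistant. Answer in at most three sentences.
"""
```

Open WebUI creates the model from this definition for you, but the same file also works directly with the CLI via ollama create my-assistant -f Modelfile.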

Conclusion

In this blog post we installed Ollama and Open WebUI on a local machine using Docker and Docker Compose, and we saw how to manage models and create custom ones.

Happy engineering!