Self-hosted AI Package

Self-hosted AI Package is an open Docker Compose template that quickly bootstraps a fully featured local AI and low-code development environment, including Ollama for your local LLMs, Open WebUI for an interface to chat with your n8n agents, and Supabase for your database, vector store, and authentication.

This is Cole's version, with a couple of improvements and the addition of Supabase, Open WebUI, and Flowise! Postgres was also removed, since Supabase runs Postgres under the hood. Also, the local RAG AI agent workflow from the video will automatically be available in your n8n instance if you use this setup instead of the base one provided by n8n!

Original Local AI Starter Kit by the n8n team

Download my n8n + Open WebUI integration directly on the Open WebUI site (more instructions below).


Curated by https://github.com/n8n-io and https://github.com/coleam00, it combines the self-hosted n8n platform with a compatible set of AI products and components to get you started quickly with building self-hosted AI workflows.

What’s included

Self-hosted n8n - Low-code platform with over 400 integrations and advanced AI components

Supabase - Open-source database as a service, and one of the most widely used databases for AI agents

Ollama - Cross-platform application for installing and running the latest local LLMs

Open WebUI - ChatGPT-like interface to privately interact with your local models and n8n agents

Flowise - No/low code AI agent builder that pairs very well with n8n

Qdrant - Open-source, high-performance vector store with a comprehensive API. Even though you can use Supabase for RAG, Qdrant was kept (unlike Postgres) because it is faster than Supabase and is therefore sometimes the better option.

Prerequisites

Before you begin, make sure you have the following software installed:

Python - required to run the start_services.py setup script

Git - required to clone the repository

Docker/Docker Desktop - required to run all of the services

Installation

Clone the repository and navigate to the project directory:

git clone https://github.com/coleam00/local-ai-packaged.git
cd local-ai-packaged

Before running the services, you need to set up your environment variables for Supabase following their self-hosting guide.

  1. Make a copy of .env.example and rename it to .env in the root directory of the project

  2. Set the following required environment variables:

    ############
    # N8N Configuration
    ############
    N8N_ENCRYPTION_KEY=
    N8N_USER_MANAGEMENT_JWT_SECRET=
    
    ############
    # Supabase Secrets
    ############
    POSTGRES_PASSWORD=
    JWT_SECRET=
    ANON_KEY=
    SERVICE_ROLE_KEY=
    DASHBOARD_USERNAME=
    DASHBOARD_PASSWORD=
    
    ############
    # Supavisor -- Database pooler
    ############
    POOLER_TENANT_ID=

    IMPORTANT: Make sure to generate secure random values for all secrets. Never use the example values in production.
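
One way to generate these secrets (a minimal sketch, assuming a Unix-like shell with OpenSSL available; any cryptographically secure generator works):

# Generates a 64-character hex secret; run it once per secret you need
openssl rand -hex 32

Note that ANON_KEY and SERVICE_ROLE_KEY are not plain random strings: they are JWTs signed with your JWT_SECRET, so generate them with the tooling linked from the Supabase self-hosting guide mentioned above.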


The project includes a start_services.py script that handles starting both the Supabase and local AI services. The script accepts a --profile flag to specify which GPU configuration to use.

For Nvidia GPU users

python start_services.py --profile gpu-nvidia

Note

If you have not used your Nvidia GPU with Docker before, please follow the Ollama Docker instructions.
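
Before starting the stack, you can sanity-check that Docker can see your GPU (this assumes the NVIDIA Container Toolkit is already installed):

docker run --rm --gpus all ubuntu nvidia-smi

If this prints your GPU details, the gpu-nvidia profile should work.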

For AMD GPU users on Linux

python start_services.py --profile gpu-amd

For Mac / Apple Silicon users

If you're using a Mac with an M1 or newer processor, you can't expose your GPU to the Docker instance, unfortunately. There are two options in this case:

  1. Run the starter kit fully on CPU:

    python start_services.py --profile cpu
  2. Run Ollama on your Mac for faster inference, and connect to that from the n8n instance:

    python start_services.py --profile none

    If you want to run Ollama on your Mac, check the Ollama homepage for installation instructions.
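
If you go this route, you will likely want to pull the workflow's model yourself, since the Ollama container that would normally download it is not running (the model name below assumes the Llama3.1 model used by the included workflow):

ollama pull llama3.1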

For Mac users running Ollama locally

If you're running Ollama locally on your Mac (not in Docker), you need to modify the OLLAMA_HOST environment variable in the n8n service configuration. Update the x-n8n section in your Docker Compose file as follows:

x-n8n: &service-n8n
  # ... other configurations ...
  environment:
    # ... other environment variables ...
    - OLLAMA_HOST=host.docker.internal:11434

Additionally, after you see "Editor is now accessible via: http://localhost:5678/":

  1. Head to http://localhost:5678/home/credentials
  2. Click on "Local Ollama service"
  3. Change the base URL to "http://host.docker.internal:11434/"
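
To confirm that Ollama is actually reachable on that port before testing anything in n8n, you can query its HTTP API from your host (the /api/tags endpoint lists the models you have pulled):

curl http://localhost:11434/api/tags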

For everyone else

python start_services.py --profile cpu

⚡️ Quick start and usage

The main component of the self-hosted AI starter kit is a Docker Compose file, pre-configured with networking and storage so there isn't much else you need to install. After completing the installation steps above, follow the steps below to get started.

  1. Open http://localhost:5678/ in your browser to set up n8n. You’ll only have to do this once. You are NOT creating an account with n8n in the setup here; it is only a local account for your instance!

  2. Open the included workflow: http://localhost:5678/workflow/vTN9y2dLXqTiDfPT

  3. Create credentials for every service:

    Ollama URL: http://ollama:11434

    Postgres (through Supabase): use the DB, username, and password from .env. IMPORTANT: the host is 'db', since that is the name of the service running Supabase (a sample credential sketch follows this list).

    Qdrant URL: http://qdrant:6333 (API key can be whatever since this is running locally)

    Google Drive: Follow this guide from n8n. Don't use localhost for the redirect URI; just use another domain you have and it will still work! Alternatively, you can set up local file triggers.

  4. Select Test workflow to start running the workflow.

  5. If this is the first time you’re running the workflow, you may need to wait until Ollama finishes downloading Llama3.1. You can inspect the docker console logs to check on the progress (see the log command after this list).

  6. Make sure to toggle the workflow as active and copy the "Production" webhook URL!

  7. Open http://localhost:3000/ in your browser to set up Open WebUI. You’ll only have to do this once. You are NOT creating an account with Open WebUI in the setup here; it is only a local account for your instance!

  8. Go to Workspace -> Functions -> Add Function -> Give name + description then paste in the code from n8n_pipe.py

    The function is also published here on Open WebUI's site.

  9. Click on the gear icon and set the n8n_url to the production URL for the webhook you copied in a previous step.

  10. Toggle the function on and now it will be available in your model dropdown in the top left!
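
For the Postgres credential in step 3, the values typically look like the sketch below (database name, user, and port assume Supabase's self-hosted defaults; the password comes from your .env):

Host: db
Database: postgres
User: postgres
Password: <your POSTGRES_PASSWORD from .env>
Port: 5432

And for checking the model download in step 5, you can tail the Ollama container's logs (use docker ps to find the exact container name on your machine):

docker logs -f ollama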

To open n8n at any time, visit http://localhost:5678/ in your browser. To open Open WebUI at any time, visit http://localhost:3000/.

With your n8n instance, you’ll have access to over 400 integrations and a suite of basic and advanced AI nodes such as AI Agent, Text classifier, and Information Extractor nodes. To keep everything local, just remember to use the Ollama node for your language model and Qdrant as your vector store.

Note

This starter kit is designed to help you get started with self-hosted AI workflows. While it’s not fully optimized for production environments, it combines robust components that work well together for proof-of-concept projects. You can customize it to meet your specific needs.

Upgrading

To update all containers to their latest versions (n8n, Open WebUI, etc.), run these commands:

# Stop all services
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml down

# Pull latest versions of all containers
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml pull

# Start services again with your desired profile
python start_services.py --profile <your-profile>

Replace <your-profile> with one of: cpu, gpu-nvidia, gpu-amd, or none.

Note: The start_services.py script itself does not update containers - it only restarts them, or pulls images the first time you run them. To get the latest versions, you must explicitly run the commands above.

Troubleshooting

Here are solutions to common issues you might encounter:

Supabase Issues

  • Supabase Pooler Restarting: If the supabase-pooler container keeps restarting itself, follow the instructions in this GitHub issue.

  • Supabase Analytics Startup Failure: If the supabase-analytics container fails to start after changing your Postgres password, delete the folder supabase/docker/volumes/db/data (see the commands sketched after this list).

  • If using Docker Desktop: Go into the Docker settings and make sure "Expose daemon on tcp://localhost:2375 without TLS" is turned on
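
A sketch of the analytics fix above, run from the project root (warning: deleting this folder wipes your local database data):

# Stop everything first
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml down

# Remove the Postgres data directory so it is re-initialized with the new password
rm -rf supabase/docker/volumes/db/data

Then start the services again with start_services.py.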

GPU Support Issues

  • Windows GPU Support: If you're having trouble running Ollama with GPU support on Windows with Docker Desktop:

    1. Open Docker Desktop settings
    2. Enable WSL 2 backend
    3. See the Docker GPU documentation for more details
  • Linux GPU Support: If you're having trouble running Ollama with GPU support on Linux, follow the Ollama Docker instructions.

👓 Recommended reading

n8n is full of useful content for getting started quickly with its AI concepts and nodes. If you run into an issue, go to support.

🎥 Video walkthrough

🛍️ More AI templates

For more AI workflow ideas, visit the official n8n AI template gallery. From each workflow, select the Use workflow button to automatically import the workflow into your local n8n instance.

Learn AI key concepts

Local AI templates

Tips & tricks

Accessing local files

The self-hosted AI starter kit will create a shared folder (by default, located in the project directory) which is mounted to the n8n container and allows n8n to access files on disk. This folder within the n8n container is located at /data/shared -- this is the path you’ll need to use in nodes that interact with the local filesystem.
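
For example, a file dropped into the shared folder on the host shows up inside the container under /data/shared (this assumes the default host folder name of ./shared):

# On the host, from the project root
echo "hello from the host" > ./shared/test.txt

# In n8n, reference the file as /data/shared/test.txt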

Nodes that interact with the local filesystem

📜 License

This project (originally created by the n8n team, link at the top of the README) is licensed under the Apache License 2.0 - see the LICENSE file for details.
