Self-Hosted Supabase

Deploy vCON MCP with a self-hosted Supabase instance for complete control over your infrastructure.

Architecture Overview

The vCON MCP stack includes:

| Service | Description |
| --- | --- |
| vcon-mcp | Main application server (HTTP MCP transport) |
| ofelia | Job scheduler for embedding vCONs |
| vcon-supabase-db | PostgreSQL database |
| vcon-supabase-kong | API Gateway |
| vcon-supabase-auth | GoTrue authentication server |
| vcon-supabase-rest | PostgREST database REST API |
| vcon-supabase-storage | File storage API |


Prerequisites

  • Docker Engine 20.10+ and Docker Compose V2

  • At least 4GB RAM available for Docker

  • Port 3002 available for the MCP API (or remap it via the Docker Compose port mapping)

  • Embedding API access (one of the following):

    • Azure OpenAI API

    • OpenAI API

    • Hugging Face API


1. Create Directory Structure
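
The exact layout is up to you; a minimal sketch that matches the volume paths used in the steps below (the top-level directory name is arbitrary):

```bash
mkdir -p vcon-mcp/volumes/db vcon-mcp/volumes/api
cd vcon-mcp
```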


2. Configure Environment Variables

Create .env file with your configuration:
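
A sketch covering the variables used in this guide (see section 9.1); POSTGRES_PASSWORD and JWT_SECRET follow the standard Supabase self-hosting conventions, and every secret below must be replaced:

```bash
# Supabase core (standard self-hosting settings)
POSTGRES_PASSWORD=replace-with-a-strong-password
JWT_SECRET=replace-with-a-random-secret-of-at-least-32-characters
SUPABASE_ANON_KEY=replace-with-anon-jwt-signed-with-JWT_SECRET
SUPABASE_SERVICE_ROLE_KEY=replace-with-service-role-jwt-signed-with-JWT_SECRET

# vCON MCP
SUPABASE_URL=http://vcon-supabase-kong:8000
MCP_ENABLED_CATEGORIES=read
VCON_API_KEYS=replace-with-comma-separated-api-keys

# Embedding provider settings go here as well (see section 9.2)
```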

Important: For production, generate new secrets:
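
For example, with openssl (after changing JWT_SECRET, regenerate the anon and service_role JWTs so they are signed with the new secret):

```bash
openssl rand -base64 32   # use as POSTGRES_PASSWORD
openssl rand -base64 32   # use as JWT_SECRET
```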


3. Create Supabase Database Files

3.1 Create volumes/db/roles.sql
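
This file typically mirrors the one shipped with the upstream Supabase self-hosting (supabase/docker) setup, which sets the internal role passwords from POSTGRES_PASSWORD; a sketch:

```sql
-- volumes/db/roles.sql (adapted from the upstream Supabase self-hosting files)
\set pgpass `echo "$POSTGRES_PASSWORD"`

ALTER USER authenticator WITH PASSWORD :'pgpass';
ALTER USER pgbouncer WITH PASSWORD :'pgpass';
ALTER USER supabase_auth_admin WITH PASSWORD :'pgpass';
ALTER USER supabase_functions_admin WITH PASSWORD :'pgpass';
ALTER USER supabase_storage_admin WITH PASSWORD :'pgpass';
```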

3.2 Create volumes/db/jwt.sql
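
Again following the upstream Supabase files, this stores the JWT settings on the database; a sketch (JWT_EXP, e.g. 3600, is passed into the db container's environment):

```sql
-- volumes/db/jwt.sql (adapted from the upstream Supabase self-hosting files)
\set jwt_secret `echo "$JWT_SECRET"`
\set jwt_exp `echo "$JWT_EXP"`

ALTER DATABASE postgres SET "app.settings.jwt_secret" TO :'jwt_secret';
ALTER DATABASE postgres SET "app.settings.jwt_exp" TO :'jwt_exp';
```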

3.3 Create volumes/db/_supabase.sql
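
In the upstream setup this simply creates the internal _supabase database used by Supabase's own services; a sketch:

```sql
-- volumes/db/_supabase.sql (adapted from the upstream Supabase self-hosting files)
\set pguser `echo "$POSTGRES_USER"`

CREATE DATABASE _supabase WITH OWNER :pguser;
```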

3.4 Create placeholder SQL files
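
The remaining init scripts mounted by the db container can start out empty; assuming the usual upstream file names (adjust to whatever your docker-compose.yml mounts):

```bash
touch volumes/db/realtime.sql volumes/db/webhooks.sql volumes/db/logs.sql volumes/db/pooler.sql
```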


4. Create Kong API Gateway Configuration

Create volumes/api/kong.yml:
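
The full upstream Supabase kong.yml is considerably longer; the minimal sketch below declares the anon/service_role key-auth consumers and routes the auth, REST, and storage APIs to this stack's service names. The internal ports (9999, 3000, 5000) are the GoTrue, PostgREST, and Storage defaults, and the $SUPABASE_ANON_KEY / $SUPABASE_SERVICE_KEY placeholders are either substituted at container start (as in the upstream setup) or replaced with the literal keys:

```yaml
_format_version: "2.1"

consumers:
  - username: anon
    keyauth_credentials:
      - key: $SUPABASE_ANON_KEY
  - username: service_role
    keyauth_credentials:
      - key: $SUPABASE_SERVICE_KEY

acls:
  - consumer: anon
    group: anon
  - consumer: service_role
    group: admin

services:
  - name: auth-v1
    url: http://vcon-supabase-auth:9999
    routes:
      - name: auth-v1-all
        strip_path: true
        paths:
          - /auth/v1/
    plugins:
      - name: cors

  - name: rest-v1
    url: http://vcon-supabase-rest:3000/
    routes:
      - name: rest-v1-all
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - admin
            - anon

  - name: storage-v1
    url: http://vcon-supabase-storage:5000/
    routes:
      - name: storage-v1-all
        strip_path: true
        paths:
          - /storage/v1/
    plugins:
      - name: cors
```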


5. Create Docker Compose File

Create docker-compose.yml:
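
A complete compose file for this stack is long; the trimmed sketch below only shows the service topology. Image tags, the vcon-mcp image name, and most environment variables are placeholders to adjust for your deployment:

```yaml
services:
  vcon-mcp:
    image: vcon-mcp:latest                 # placeholder – use your actual vCON MCP image
    ports:
      - "3002:3002"                        # MCP HTTP API
    environment:
      SUPABASE_URL: http://vcon-supabase-kong:8000
      SUPABASE_ANON_KEY: ${SUPABASE_ANON_KEY}
      SUPABASE_SERVICE_ROLE_KEY: ${SUPABASE_SERVICE_ROLE_KEY}
      MCP_ENABLED_CATEGORIES: ${MCP_ENABLED_CATEGORIES}
      VCON_API_KEYS: ${VCON_API_KEYS}
    depends_on:
      - vcon-supabase-kong

  ofelia:
    image: mcuadros/ofelia:latest          # job scheduler (see section 9.4)
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  vcon-supabase-db:
    image: supabase/postgres:15.1.0.147    # example tag – pin to a current release
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
      JWT_EXP: 3600
    volumes:
      - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:ro
      - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:ro
      - db-data:/var/lib/postgresql/data

  vcon-supabase-kong:
    image: kong:2.8.1
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
    volumes:
      - ./volumes/api/kong.yml:/home/kong/kong.yml:ro

  vcon-supabase-auth:
    image: supabase/gotrue:latest          # example tag
  vcon-supabase-rest:
    image: postgrest/postgrest:latest      # example tag
  vcon-supabase-storage:
    image: supabase/storage-api:latest     # example tag

volumes:
  db-data:
```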


6. Start the Stack
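
From the directory containing docker-compose.yml:

```bash
docker compose up -d
```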

Watch the startup logs:
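
```bash
docker compose logs -f
```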

Wait until all services are healthy (typically 1-2 minutes for first startup).


7. Verify Deployment

7.1 Check Service Status
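
```bash
docker compose ps
```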

All services should show as "healthy" or "running".

7.2 Test Health Endpoint
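
Assuming the health endpoint is exposed at /health on the published MCP port (3002 in this guide; the exact path may differ between vCON MCP versions):

```bash
curl -i http://localhost:3002/health
```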

A successful (HTTP 200) response indicates the service is up; the exact body depends on your vCON MCP version.

7.3 View Logs
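
```bash
docker compose logs -f vcon-mcp
```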


8. Pushing vCONs to vCON MCP

vCON MCP exposes an HTTP API for ingesting vCONs. External systems can push vCONs to the endpoint.

8.1 API Endpoint
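
The ingest URL has the general form below; the path itself depends on your vCON MCP version and is shown only as a placeholder:

```
http://<docker-host>:3002/<ingest-path>
```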

Replace <docker-host> with:

  • localhost if calling from the same machine

  • The machine's IP address or hostname if calling from another system

8.2 Example: Push a vCON
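
A sketch using curl. The ingest path, the x-api-key header name, and the minimal vCON body are illustrative placeholders, not the exact contract of your vCON MCP version:

```bash
curl -X POST "http://localhost:3002/<ingest-path>" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <one of the keys in VCON_API_KEYS>" \
  -d '{
        "vcon": "0.0.1",
        "uuid": "018f2a5e-0000-8000-8000-000000000000",
        "created_at": "2025-01-01T00:00:00Z",
        "parties": [
          { "tel": "+15551234567", "name": "Alice" },
          { "tel": "+15557654321", "name": "Bob" }
        ],
        "dialog": []
      }'
```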

8.3 Webhook Integration

If your vCON ingestion system supports webhooks, configure it to POST vCONs to the ingest endpoint shown in section 8.1.

Conserver Integration

For real-time vCON ingestion from Conserver, use the links.webhook module to push vCONs directly to vCON MCP.

Example Conserver Configuration:
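
A sketch of what such a configuration might look like. It follows the Conserver links/chains layout; the upstream link names are illustrative, and option keys such as webhook-urls may differ between Conserver versions:

```yaml
links:
  # ... transcription, diarization, summarization links defined here ...
  push_to_vcon_mcp:
    module: links.webhook
    options:
      webhook-urls:
        - http://<docker-host>:3002/<ingest-path>

chains:
  main_chain:
    links:
      - transcribe              # illustrative upstream links
      - summarize
      - push_to_vcon_mcp        # keep the webhook last (see notes below)
    ingress_lists:
      - main_ingress
    egress_lists:
      - main_egress
    enabled: 1
```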

Configuration Notes:

  • The push_to_vcon_mcp link uses links.webhook to POST vCONs to the MCP endpoint

  • Place the webhook link at the end of the chain to push fully processed vCONs (with transcription, diarization, summaries, etc.)


9. Configuration Reference

9.1 Environment Variables

| Variable | Description | Example |
| --- | --- | --- |
| SUPABASE_URL | Internal URL to the Kong gateway | http://vcon-supabase-kong:8000 |
| SUPABASE_SERVICE_ROLE_KEY | Service-role JWT for admin access | Generated JWT |
| SUPABASE_ANON_KEY | Anonymous-user JWT | Generated JWT |
| MCP_ENABLED_CATEGORIES | Enabled tool categories | read (for production) |
| VCON_API_KEYS | API keys for authentication | Comma-separated keys |

9.2 Embedding Provider Configuration

Choose one of the following embedding providers (an illustrative configuration sketch covering all three follows below):

Azure OpenAI

OpenAI

Hugging Face
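
The exact variable names depend on your vCON MCP release; purely as an illustration (hypothetical names), each provider typically needs an API key plus an endpoint/deployment or model name:

```bash
# Azure OpenAI (hypothetical variable names)
# EMBEDDING_PROVIDER=azure
# AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com
# AZURE_OPENAI_API_KEY=...
# AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-3-small

# OpenAI (hypothetical variable names)
# EMBEDDING_PROVIDER=openai
# OPENAI_API_KEY=...
# OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# Hugging Face (hypothetical variable names)
# EMBEDDING_PROVIDER=huggingface
# HUGGINGFACE_API_KEY=...
# HUGGINGFACE_EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
```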

9.3 MCP Tool Categories

| Category | Description | Production Use |
| --- | --- | --- |
| read | Query vCONs, search, analytics | ✅ Recommended |
| write | Create/update vCONs | Use with caution |
| schema | Modify database schema | Not recommended |
| analytics | Heavy analytics operations | Use with caution |
| infra | Infrastructure management | Not recommended |

9.4 Ofelia Job Scheduler

The embedding job runs every 10 minutes to generate embeddings for new vCONs:
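
Ofelia is usually driven by Docker labels on the target container (or by a config.ini mounted into the ofelia container); a sketch of the label form, where the actual embedding command is a placeholder specific to your vCON MCP image:

```yaml
  vcon-mcp:
    # ...
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.embed-vcons.schedule: "@every 10m"
      ofelia.job-exec.embed-vcons.command: "<embedding command for your vCON MCP image>"
```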


10. Operations

10.1 Common Commands
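
Run these from the directory containing docker-compose.yml:

```bash
docker compose ps                  # service status
docker compose logs -f vcon-mcp    # follow application logs
docker compose restart vcon-mcp    # restart a single service
docker compose down                # stop the stack (volumes are kept)
docker compose up -d               # start, or apply compose changes
```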

10.2 Database Backup
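
Assuming the default postgres superuser, a simple logical backup:

```bash
docker compose exec -T vcon-supabase-db pg_dump -U postgres -d postgres > vcon_backup_$(date +%F).sql
```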

10.3 Updating Images
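
Pull newer image tags and recreate the affected containers:

```bash
docker compose pull
docker compose up -d
```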

10.4 Viewing Database
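
Open a psql shell in the database container (default postgres superuser assumed):

```bash
docker compose exec vcon-supabase-db psql -U postgres
```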


11. Troubleshooting

11.1 Services Won't Start

Check port conflicts:
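
```bash
# Is something already listening on the MCP port?
sudo lsof -i :3002
```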

Check Docker resources:
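
```bash
docker system df             # Docker disk usage
docker stats --no-stream     # per-container memory and CPU
```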

11.2 Database Connection Issues

Check database is healthy:
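
```bash
docker compose ps vcon-supabase-db
docker compose exec vcon-supabase-db pg_isready -U postgres
```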

Reset database (WARNING: destroys all data):
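
```bash
# Removes containers AND volumes, i.e. all stored vCONs
docker compose down -v
docker compose up -d
```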

11.3 Kong Gateway Issues

Check Kong logs:
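
```bash
docker compose logs -f vcon-supabase-kong
```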

Verify Kong config:
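
kong config parse validates a declarative config file; the in-container path assumes the mount shown in the compose sketch above:

```bash
docker compose exec vcon-supabase-kong kong config parse /home/kong/kong.yml
```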

11.4 vCON MCP Health Check Fails

Check application logs:
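
```bash
docker compose logs -f vcon-mcp
```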

Verify Supabase connection:
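
One way to confirm that Kong and PostgREST respond from inside the Docker network, assuming curl is available in the Kong container (replace the apikey value with SUPABASE_ANON_KEY from .env; an HTTP 200 is expected):

```bash
docker compose exec vcon-supabase-kong \
  curl -s -o /dev/null -w "%{http_code}\n" \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  http://localhost:8000/rest/v1/
```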

11.5 Embedding Job Not Running

Check Ofelia logs:
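
```bash
docker compose logs -f ofelia
```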

Run embedding manually:
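
You can execute the same command the Ofelia job runs (the command itself is specific to your vCON MCP image and is shown here as a placeholder):

```bash
docker compose exec vcon-mcp <embedding command for your vCON MCP image>
```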


Next Steps
