Docker Compose (Single Server)
The fastest way to deploy Contract Lucidity. Suitable for demos, development, and small teams (up to ~50 concurrent users).
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended) with at least 4 GB RAM and 2 vCPUs
- Docker Engine 24+ and Docker Compose v2
- Git
- A domain name pointing to the server (for production use with SSL)
Step 1: Clone the Repository
git clone https://github.com/your-org/contract-lucidity.git
cd contract-lucidity
Step 2: Create the Environment File
cp .env.example .env
Edit .env with your configuration:
# ─── Database ───
POSTGRES_USER=cl_user
POSTGRES_PASSWORD=YOUR_STRONG_PASSWORD_HERE
POSTGRES_DB=contract_lucidity
POSTGRES_HOST=cl-postgres
POSTGRES_PORT=5432
# ─── Redis ───
REDIS_HOST=cl-redis
REDIS_PORT=6379
REDIS_URL=redis://cl-redis:6379/0
CELERY_BROKER_URL=redis://cl-redis:6379/0
CELERY_RESULT_BACKEND=redis://cl-redis:6379/1
# ─── JWT ───
JWT_SECRET_KEY=GENERATE_A_RANDOM_48_CHAR_STRING
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=60
JWT_REFRESH_TOKEN_EXPIRE_DAYS=7
# ─── Application ───
APP_NAME=Contract Lucidity
APP_ENV=production
LOG_LEVEL=INFO
MAX_UPLOAD_SIZE_MB=100
# ─── Storage ───
STORAGE_PATH=/data/storage
CONFIG_PATH=/data/config
# ─── URLs ───
CORS_ORIGINS=https://your-domain.com
FRONTEND_URL=https://your-domain.com
BACKEND_INTERNAL_URL=http://cl-backend:8000
# ─── Default Admin ───
DEFAULT_ADMIN_EMAIL=[email protected]
DEFAULT_ADMIN_PASSWORD=<your-strong-password>
Generate JWT_SECRET_KEY and POSTGRES_PASSWORD with:
openssl rand -hex 32
This yields 64 hexadecimal characters, comfortably above the 48-character minimum for JWT_SECRET_KEY. Never use default or placeholder values in production.
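To generate both values in one step, a quick sketch you can paste from:

```shell
# Print two freshly generated secrets for pasting into .env.
# "openssl rand -hex 32" yields 64 hex characters each.
echo "JWT_SECRET_KEY=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -hex 32)"
```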
Step 3: Review the Docker Compose File
The docker-compose.yml defines all five services:
services:
  cl-postgres:
    image: pgvector/pgvector:pg16
    container_name: cl-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-cl_user}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in .env}
      POSTGRES_DB: ${POSTGRES_DB:-contract_lucidity}
    ports:
      - "5432:5432"
    volumes:
      - cl-pgdata:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/01-init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-cl_user} -d ${POSTGRES_DB:-contract_lucidity}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cl-network

  cl-redis:
    image: redis:7-alpine
    container_name: cl-redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - cl-redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - cl-network

  cl-backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: cl-backend
    restart: unless-stopped
    env_file:
      - .env
    expose:
      - "8000"
    volumes:
      - ./backend:/app
      - cl-storage:/data/storage
      - cl-config:/data/config
    depends_on:
      cl-postgres:
        condition: service_healthy
      cl-redis:
        condition: service_healthy
    networks:
      - cl-network

  cl-worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: cl-worker
    restart: unless-stopped
    env_file:
      - .env
    command: celery -A app.celery_app worker --loglevel=info --concurrency=2
    volumes:
      - ./backend:/app
      - cl-storage:/data/storage
      - cl-config:/data/config
    depends_on:
      cl-postgres:
        condition: service_healthy
      cl-redis:
        condition: service_healthy
    networks:
      - cl-network

  cl-frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: cl-frontend
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src
      - ./frontend/public:/app/public
    environment:
      - BACKEND_INTERNAL_URL=${BACKEND_INTERNAL_URL:-http://cl-backend:8000}
      - NEXT_PUBLIC_FRONTEND_URL=${FRONTEND_URL:-http://localhost:3000}
    depends_on:
      - cl-backend
    networks:
      - cl-network

volumes:
  cl-pgdata:
  cl-redisdata:
  cl-storage:
  cl-config:

networks:
  cl-network:
    driver: bridge
Key points:
- cl-postgres uses the pgvector/pgvector:pg16 image (PostgreSQL 16 with pgvector pre-installed)
- cl-backend and cl-worker share the same codebase (./backend) and the same cl-storage volume
- cl-worker overrides the default command to run Celery instead of uvicorn
- cl-frontend connects to the backend via the internal Docker network (http://cl-backend:8000)
- The init.sql script is mounted to auto-enable the vector extension on first boot
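Once the stack is running (Step 4), you can confirm that init.sql enabled the extension. A sketch, assuming the default cl_user / contract_lucidity credentials from .env:

```shell
# List installed extensions; the output should include a "vector" row
docker exec cl-postgres psql -U cl_user -d contract_lucidity \
  -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'vector';"
```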
Step 4: Build and Start
docker compose up -d --build
This will:
- Build the backend image (Python 3.12, Tesseract OCR, pip dependencies)
- Build the frontend image (Node 20, npm install)
- Pull pgvector/pgvector:pg16 and redis:7-alpine
- Start all services in dependency order (Postgres and Redis first, then backend, worker, frontend)
Monitor the startup:
docker compose logs -f
Wait until you see:
- cl-postgres -- database system is ready to accept connections
- cl-redis -- Ready to accept connections
- cl-backend -- Application startup complete
- cl-worker -- celery@... ready
- cl-frontend -- Ready in ...
Step 5: Verify the Deployment
# Check all containers are running
docker compose ps
# Test the backend health endpoint
curl http://localhost:8000/api/health
# Test the frontend
curl -I http://localhost:3000
Expected output from the health check:
{"status": "healthy"}
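If you script the deployment, you can poll the health endpoint instead of tailing logs. A sketch; adjust the timeout to your hardware:

```shell
# Poll the backend health endpoint for up to ~60 seconds
for i in $(seq 1 30); do
  if curl -fsS http://localhost:8000/api/health >/dev/null 2>&1; then
    echo "backend is healthy"
    break
  fi
  sleep 2
done
```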
Step 6: Set Up a Reverse Proxy (Production)
For production, place Nginx or Caddy in front of the frontend to handle SSL termination.
Option A: Caddy (Recommended -- automatic SSL)
your-domain.com {
    reverse_proxy localhost:3000
}
Option B: Nginx with Certbot
server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    client_max_body_size 100M;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
Then obtain the certificate:
sudo certbot --nginx -d your-domain.com
Step 7: Access the Application
- Open https://your-domain.com in a browser
- Log in with the admin credentials from your .env file
- Complete the setup wizard (configure your AI provider)
- Upload a test document to verify the pipeline works end-to-end
Common Operations
Updating
cd contract-lucidity
git pull origin master
docker compose up -d --build
Viewing Logs
# All services
docker compose logs -f
# Specific service
docker compose logs -f cl-worker
Adjusting Worker Concurrency
Edit the --concurrency flag in the cl-worker command in docker-compose.yml, then recreate the container. Note that docker compose restart does not re-read the Compose file, so use up -d instead:
# In docker-compose.yml, change --concurrency=2 to your desired value
# Then recreate the worker so the new command takes effect
docker compose up -d cl-worker
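Alternatively, Compose automatically merges a docker-compose.override.yml over the base file, so you can change the concurrency without editing docker-compose.yml itself. A sketch, assuming you want four worker processes:

```yaml
# docker-compose.override.yml (sketch) -- merged over docker-compose.yml
# automatically by "docker compose up"
services:
  cl-worker:
    command: celery -A app.celery_app worker --loglevel=info --concurrency=4
```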
Restarting a Single Service
docker compose restart cl-frontend
Database Backup
docker exec cl-postgres pg_dump -U cl_user contract_lucidity > backup_$(date +%Y%m%d).sql
Database Restore
cat backup_20260319.sql | docker exec -i cl-postgres psql -U cl_user contract_lucidity
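To automate backups, a nightly cron entry might look like the following. A sketch; it assumes a /backups directory exists and keeps seven days of compressed dumps:

```
# crontab entry: dump at 02:00 daily, gzip, prune dumps older than 7 days
0 2 * * * docker exec cl-postgres pg_dump -U cl_user contract_lucidity | gzip > /backups/cl_$(date +\%Y\%m\%d).sql.gz && find /backups -name 'cl_*.sql.gz' -mtime +7 -delete
```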
Common Issues
Port Conflicts
If port 5432 or 6379 is already in use, change the host port mapping in docker-compose.yml:
ports:
- "5433:5432" # Map to 5433 on host instead
Only the host port (left side) needs to change. The internal container port stays the same. Service-to-service communication uses the Docker network and is unaffected.
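To find out what is already holding a port before remapping, either of these works on most distributions:

```shell
# Show the process listening on 5432 (ss ships with iproute2;
# lsof may need to be installed separately)
sudo ss -tlnp '( sport = :5432 )'
sudo lsof -i :5432
```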
Worker Cannot Find Uploaded Files
This usually means the cl-storage volume is not mounted in both containers. Verify:
# Check volume exists
docker volume ls | grep cl-storage
# Verify both containers see the same data
docker exec cl-backend ls /data/storage
docker exec cl-worker ls /data/storage
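A quick round-trip test makes the failure obvious: write a marker file from one container and read it from the other.

```shell
# Write from the backend, read from the worker; both paths must
# resolve to the same cl-storage volume
docker exec cl-backend sh -c 'echo volume-ok > /data/storage/.volume-check'
docker exec cl-worker cat /data/storage/.volume-check
docker exec cl-worker rm /data/storage/.volume-check
```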
Frontend Shows "Failed to Fetch"
The frontend cannot reach the backend. Check:
- BACKEND_INTERNAL_URL is set to http://cl-backend:8000
- Both containers are on the same Docker network:
docker network inspect contract-lucidity_cl-network
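You can also test connectivity from inside the frontend container itself. This sketch assumes the image ships wget (the Node Alpine images do, via BusyBox):

```shell
# Resolve and reach the backend over the internal Docker network
docker exec cl-frontend wget -qO- http://cl-backend:8000/api/health
```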
Out of Disk Space
See Document Storage for capacity planning. For a quick fix:
# Check disk usage
docker system df
df -h
# Clean unused Docker resources
docker system prune -a --volumes
Warning: the --volumes flag removes every volume not currently attached to a container. If the stack is stopped, that includes cl-pgdata (the database). Only run it on a fresh install or after taking a backup.
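A safer first pass reclaims images and build cache without touching any volumes:

```shell
# Remove unused images and build cache only; volumes are never touched
docker image prune -a
docker builder prune
```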
HMR / Hot Reload Issues
If the frontend shows stale content after code changes:
docker compose restart cl-frontend
This is a known Docker volume mount caching issue with Next.js HMR.