
Google Cloud Platform (GCP)

This guide deploys Contract Lucidity on GCP using Cloud Run for serverless container hosting, Cloud SQL for PostgreSQL, and Memorystore for Redis. GCP typically offers the lowest cost among the three major cloud providers for this workload profile.

Architecture

Prerequisites

  • GCP account with a billing-enabled project
  • Google Cloud CLI (gcloud) installed and configured
  • Docker installed locally
  • A registered domain name

Configure gcloud for the project and enable the required APIs (servicenetworking.googleapis.com is needed for the private-IP Cloud SQL instance created in Step 2):

# Set project
gcloud config set project <project-id>

# Enable required APIs
gcloud services enable \
run.googleapis.com \
sqladmin.googleapis.com \
redis.googleapis.com \
artifactregistry.googleapis.com \
secretmanager.googleapis.com \
vpcaccess.googleapis.com \
servicenetworking.googleapis.com \
compute.googleapis.com

Step 1: VPC Network

# Create VPC
gcloud compute networks create cl-vpc --subnet-mode=custom

# Create subnet for Cloud Run and databases
gcloud compute networks subnets create cl-subnet \
--network=cl-vpc \
--region=us-central1 \
--range=10.0.0.0/24

# Create VPC connector for Cloud Run -> VPC resources
gcloud compute networks vpc-access connectors create cl-connector \
--region=us-central1 \
--network=cl-vpc \
--range=10.0.1.0/28
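
A private-IP Cloud SQL instance (Step 2 uses --no-assign-ip) also requires private services access on the VPC before it can be created. A sketch; the allocated range name cl-psa-range is illustrative:

```shell
# Allocate an IP range for Google-managed services
gcloud compute addresses create cl-psa-range \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=cl-vpc

# Peer the VPC with Google's service producer network
# (requires the servicenetworking.googleapis.com API)
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=cl-psa-range \
  --network=cl-vpc
```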

Step 2: Cloud SQL PostgreSQL with pgvector

# Create Cloud SQL instance
gcloud sql instances create cl-postgres \
--database-version=POSTGRES_16 \
--tier=db-custom-2-4096 \
--region=us-central1 \
--network=cl-vpc \
--no-assign-ip \
--storage-type=SSD \
--storage-size=50GB \
--storage-auto-increase \
--availability-type=regional \
--backup-start-time=02:00 \
--enable-point-in-time-recovery

# Set the root password
gcloud sql users set-password postgres \
--instance=cl-postgres \
--password='<strong-password>'

# Create application user
gcloud sql users create cl_user \
--instance=cl-postgres \
--password='<strong-password>'

# Create database
gcloud sql databases create contract_lucidity \
--instance=cl-postgres

Enable pgvector:

# Connect through Cloud SQL Proxy or gcloud sql connect (for a private-IP instance,
# run the Cloud SQL Auth Proxy from a machine inside the VPC).
# Creating extensions requires the cloudsqlsuperuser role, so connect as postgres.
gcloud sql connect cl-postgres --user=postgres --database=contract_lucidity

# Inside psql
CREATE EXTENSION IF NOT EXISTS vector;

pgvector on Cloud SQL

Cloud SQL for PostgreSQL supports pgvector natively; no instance-level database flag is required, and the extension is enabled per database with CREATE EXTENSION vector. Cloud SQL also integrates with Vertex AI for generating embeddings directly within SQL queries.
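
With the extension enabled, embeddings live in a vector column and are queried by distance. A sketch of the usage pattern; the table, column names, and 1536-dimension size are illustrative, not part of Contract Lucidity's actual schema, and the HNSW index assumes pgvector 0.5 or later:

```sql
-- Dimension must match your embedding model
CREATE TABLE clause_embeddings (
    id        bigserial PRIMARY KEY,
    clause    text NOT NULL,
    embedding vector(1536) NOT NULL
);

-- Approximate nearest-neighbor index using cosine distance
CREATE INDEX ON clause_embeddings USING hnsw (embedding vector_cosine_ops);

-- Top 5 clauses most similar to a query embedding
SELECT clause, embedding <=> '<query-embedding>'::vector AS distance
FROM clause_embeddings
ORDER BY embedding <=> '<query-embedding>'::vector
LIMIT 5;
```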

Step 3: Memorystore for Redis

gcloud redis instances create cl-redis \
--region=us-central1 \
--tier=standard \
--size=1 \
--redis-version=redis_7_0 \
--network=cl-vpc

# Get the host IP
gcloud redis instances describe cl-redis --region=us-central1 \
--format='value(host)'

The redis:// URLs in Step 7 assume in-transit encryption is left disabled (the default), which is acceptable inside a private VPC. If you instead create the instance with --transit-encryption-mode=SERVER_AUTHENTICATION, it serves TLS on port 6378, and every client must connect with rediss:// and trust the instance's server CA certificate.

Step 4: Cloud Storage for Documents

# Create bucket
gcloud storage buckets create gs://cl-documents-<project-id> \
--location=us-central1 \
--default-storage-class=STANDARD \
--uniform-bucket-level-access

# Set lifecycle rule to move old documents to Nearline after 90 days
cat > lifecycle.json << 'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 90}
    }
  ]
}
EOF

gcloud storage buckets update gs://cl-documents-<project-id> \
--lifecycle-file=lifecycle.json

Cloud Storage vs Filestore

Cloud Run supports mounting Cloud Storage buckets as FUSE volumes (via GCS FUSE). This is the simplest option and works well for Contract Lucidity's write-once-read-many pattern. For workloads requiring true POSIX semantics, use Filestore (NFS) instead. See Document Storage for a comparison.
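
GCS FUSE maps filesystem calls onto object operations, so appends and in-place edits are costly while whole-file writes and reads are cheap. A minimal sketch of the write-once-read-many pattern the mount suits; the temp-directory fallback exists only so the snippet runs outside Cloud Run:

```shell
# Use the mounted bucket path if present, else a temp dir for local testing
STORAGE_PATH="${STORAGE_PATH:-$(mktemp -d)}"
mkdir -p "${STORAGE_PATH}/documents"

# Write each document exactly once, at its final path (no appends, no edits)
printf '%s' 'contract body v1' > "${STORAGE_PATH}/documents/doc-123.pdf"

# Reads can happen many times
cat "${STORAGE_PATH}/documents/doc-123.pdf"
```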

Step 5: Artifact Registry

# Create repository
gcloud artifacts repositories create cl-images \
--repository-format=docker \
--location=us-central1

# Configure Docker authentication
gcloud auth configure-docker us-central1-docker.pkg.dev

# Build and push images
cd contract-lucidity

# Backend
docker build -t us-central1-docker.pkg.dev/<project-id>/cl-images/cl-backend:latest \
./backend -f ./backend/Dockerfile
docker push us-central1-docker.pkg.dev/<project-id>/cl-images/cl-backend:latest

# Worker
docker build -t us-central1-docker.pkg.dev/<project-id>/cl-images/cl-worker:latest \
./backend -f ./backend/Dockerfile.worker
docker push us-central1-docker.pkg.dev/<project-id>/cl-images/cl-worker:latest

# Frontend
docker build -t us-central1-docker.pkg.dev/<project-id>/cl-images/cl-frontend:latest \
./frontend -f ./frontend/Dockerfile
docker push us-central1-docker.pkg.dev/<project-id>/cl-images/cl-frontend:latest

Step 6: Secret Manager

# Create secrets
echo -n '<postgres-password>' | gcloud secrets create postgres-password --data-file=-
echo -n '<jwt-secret>' | gcloud secrets create jwt-secret --data-file=-
echo -n '<admin-password>' | gcloud secrets create admin-password --data-file=-
echo -n '<redis-auth-string>' | gcloud secrets create redis-auth --data-file=-

Grant the Cloud Run service account access:

PROJECT_NUMBER=$(gcloud projects describe <project-id> --format='value(projectNumber)')

for secret in postgres-password jwt-secret admin-password redis-auth; do
gcloud secrets add-iam-policy-binding $secret \
--member="serviceAccount:${PROJECT_NUMBER}[email protected]" \
--role="roles/secretmanager.secretAccessor"
done

Step 7: Deploy Cloud Run Services

cl-backend

REDIS_HOST=$(gcloud redis instances describe cl-redis --region=us-central1 --format='value(host)')
PG_IP=$(gcloud sql instances describe cl-postgres --format='value(ipAddresses[0].ipAddress)')

gcloud run deploy cl-backend \
--image=us-central1-docker.pkg.dev/<project-id>/cl-images/cl-backend:latest \
--region=us-central1 \
--platform=managed \
--port=8000 \
--cpu=1 \
--memory=2Gi \
--min-instances=1 \
--max-instances=10 \
--vpc-connector=cl-connector \
--vpc-egress=private-ranges-only \
--ingress=internal \
--no-allow-unauthenticated \
--set-env-vars="\
APP_ENV=production,\
LOG_LEVEL=INFO,\
MAX_UPLOAD_SIZE_MB=100,\
POSTGRES_USER=cl_user,\
POSTGRES_DB=contract_lucidity,\
POSTGRES_HOST=${PG_IP},\
POSTGRES_PORT=5432,\
REDIS_URL=redis://${REDIS_HOST}:6379/0,\
CELERY_BROKER_URL=redis://${REDIS_HOST}:6379/0,\
CELERY_RESULT_BACKEND=redis://${REDIS_HOST}:6379/1,\
JWT_ALGORITHM=HS256,\
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=60,\
JWT_REFRESH_TOKEN_EXPIRE_DAYS=7,\
STORAGE_PATH=/data/storage,\
CORS_ORIGINS=https://your-domain.com,\
FRONTEND_URL=https://your-domain.com,\
[email protected]" \
--set-secrets="\
POSTGRES_PASSWORD=postgres-password:latest,\
JWT_SECRET_KEY=jwt-secret:latest,\
DEFAULT_ADMIN_PASSWORD=admin-password:latest" \
--execution-environment=gen2 \
--add-volume=name=cl-storage,type=cloud-storage,bucket=cl-documents-<project-id> \
--add-volume-mount=volume=cl-storage,mount-path=/data/storage

cl-worker

gcloud run deploy cl-worker \
--image=us-central1-docker.pkg.dev/<project-id>/cl-images/cl-worker:latest \
--region=us-central1 \
--platform=managed \
--no-port \
--cpu=2 \
--memory=4Gi \
--min-instances=1 \
--max-instances=5 \
--vpc-connector=cl-connector \
--vpc-egress=private-ranges-only \
--ingress=internal \
--no-allow-unauthenticated \
--set-env-vars="\
APP_ENV=production,\
LOG_LEVEL=INFO,\
CELERY_CONCURRENCY=4,\
POSTGRES_USER=cl_user,\
POSTGRES_DB=contract_lucidity,\
POSTGRES_HOST=${PG_IP},\
POSTGRES_PORT=5432,\
REDIS_URL=redis://${REDIS_HOST}:6379/0,\
CELERY_BROKER_URL=redis://${REDIS_HOST}:6379/0,\
CELERY_RESULT_BACKEND=redis://${REDIS_HOST}:6379/1,\
STORAGE_PATH=/data/storage" \
--set-secrets="\
POSTGRES_PASSWORD=postgres-password:latest" \
--execution-environment=gen2 \
--add-volume=name=cl-storage,type=cloud-storage,bucket=cl-documents-<project-id> \
--add-volume-mount=volume=cl-storage,mount-path=/data/storage

Cloud Run and Long-Running Workers

Cloud Run is designed for request-driven workloads. The Celery worker runs continuously and does not serve HTTP requests. You have two options:

  1. Cloud Run with --no-cpu-throttling and --min-instances=1 -- keeps the worker always running (recommended for simplicity)
  2. Compute Engine or GKE -- run the worker on a VM or Kubernetes pod for traditional always-on behavior

Option 1 works well for most deployments. Add --no-cpu-throttling to the worker deploy command.

# Update worker to keep CPU always allocated
gcloud run services update cl-worker \
--region=us-central1 \
--no-cpu-throttling \
--min-instances=1

cl-frontend

BACKEND_URL=$(gcloud run services describe cl-backend --region=us-central1 --format='value(status.url)')

gcloud run deploy cl-frontend \
--image=us-central1-docker.pkg.dev/<project-id>/cl-images/cl-frontend:latest \
--region=us-central1 \
--platform=managed \
--port=3000 \
--cpu=1 \
--memory=2Gi \
--min-instances=1 \
--max-instances=10 \
--vpc-connector=cl-connector \
--vpc-egress=all-traffic \
--allow-unauthenticated \
--set-env-vars="\
BACKEND_INTERNAL_URL=${BACKEND_URL},\
NEXT_PUBLIC_FRONTEND_URL=https://your-domain.com"
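
Because cl-backend is deployed with --ingress=internal and --no-allow-unauthenticated, the frontend's requests must reach it through the VPC, its runtime service account needs invoke permission, and the frontend application must attach an identity token to each request. A sketch of the IAM grant, assuming the Compute Engine default service account:

```shell
PROJECT_NUMBER=$(gcloud projects describe <project-id> --format='value(projectNumber)')

# Allow the frontend's service account to invoke the internal backend
gcloud run services add-iam-policy-binding cl-backend \
  --region=us-central1 \
  --member="serviceAccount:${PROJECT_NUMBER}[email protected]" \
  --role="roles/run.invoker"
```

If the frontend cannot mint identity tokens, a simpler fallback is to deploy cl-backend with --allow-unauthenticated and rely on --ingress=internal alone.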

Step 8: Load Balancer and Custom Domain

# Reserve a global static IP
gcloud compute addresses create cl-ip --global

# Create a serverless NEG for the frontend
gcloud compute network-endpoint-groups create cl-frontend-neg \
--region=us-central1 \
--network-endpoint-type=serverless \
--cloud-run-service=cl-frontend

# Create backend service
gcloud compute backend-services create cl-backend-service \
--global \
--load-balancing-scheme=EXTERNAL_MANAGED

gcloud compute backend-services add-backend cl-backend-service \
--global \
--network-endpoint-group=cl-frontend-neg \
--network-endpoint-group-region=us-central1

# Create URL map
gcloud compute url-maps create cl-url-map \
--default-service=cl-backend-service

# Create managed SSL certificate
gcloud compute ssl-certificates create cl-cert \
--domains=your-domain.com \
--global

# Create HTTPS proxy
gcloud compute target-https-proxies create cl-https-proxy \
--url-map=cl-url-map \
--ssl-certificates=cl-cert

# Create forwarding rule
gcloud compute forwarding-rules create cl-https-rule \
--global \
--address=cl-ip \
--target-https-proxy=cl-https-proxy \
--ports=443

# Create HTTP -> HTTPS redirect
gcloud compute url-maps import cl-http-redirect --source=- << 'EOF'
name: cl-http-redirect
defaultUrlRedirect:
  httpsRedirect: true
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
EOF

gcloud compute target-http-proxies create cl-http-proxy \
--url-map=cl-http-redirect

gcloud compute forwarding-rules create cl-http-rule \
--global \
--address=cl-ip \
--target-http-proxy=cl-http-proxy \
--ports=80

# Get the IP address for DNS configuration
gcloud compute addresses describe cl-ip --global --format='value(address)'

Point your domain's A record to the static IP address.
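
The managed certificate can take up to an hour to provision, and provisioning only begins once DNS resolves to the load balancer IP. Poll its status; the site serves HTTPS when it reports ACTIVE:

```shell
gcloud compute ssl-certificates describe cl-cert --global \
  --format='value(managed.status)'
```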

Cost Estimate

Estimated monthly costs (as of March 2026) for a production deployment in us-central1:

| Service | Specification | Estimated Monthly Cost |
| --- | --- | --- |
| Cloud Run (frontend) | 1-10 instances, 1 vCPU / 2 Gi | ~$40 |
| Cloud Run (backend) | 1-10 instances, 1 vCPU / 2 Gi | ~$40 |
| Cloud Run (worker) | 1-5 instances, 2 vCPU / 4 Gi, always-on | ~$90 |
| Cloud SQL PostgreSQL | db-custom-2-4096, HA, 50 GB SSD | ~$120 |
| Memorystore Redis | Standard, 1 GB | ~$60 |
| Cloud Storage | Standard, 100 GB | ~$2 |
| Artifact Registry | Image storage | ~$1 |
| Secret Manager | 4 secrets | ~$1 |
| Load Balancer | Global HTTPS LB | ~$20 |
| VPC Connector | e2-micro | ~$7 |
| Total | | ~$381/month |

Cost Optimization
  • Cloud Run free tier includes 180,000 vCPU-seconds and 360,000 GiB-seconds per month per billing account.
  • Committed use discounts on Cloud SQL save up to 40% (3-year) or 20% (1-year).
  • Cloud Storage lifecycle rules automatically move old documents to Nearline (cheaper) storage.
  • Scale the frontend and backend to zero (--min-instances=0) during off-hours if cold-start latency is acceptable.
  • Use Cloud Run jobs for batch processing workloads instead of always-on workers.
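
The last bullet can be sketched as follows; the job name cl-reindex and its reuse of the worker image are illustrative assumptions, not an existing Contract Lucidity entry point:

```shell
# A Cloud Run job reuses the worker image but runs to completion instead of serving
gcloud run jobs create cl-reindex \
  --image=us-central1-docker.pkg.dev/<project-id>/cl-images/cl-worker:latest \
  --region=us-central1 \
  --tasks=1 \
  --max-retries=1 \
  --vpc-connector=cl-connector

# Execute on demand, or on a schedule via Cloud Scheduler
gcloud run jobs execute cl-reindex --region=us-central1
```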

Verification

# Check Cloud Run services
gcloud run services list --region=us-central1

# View logs
gcloud run services logs read cl-backend --region=us-central1 --limit=50
gcloud run services logs read cl-worker --region=us-central1 --limit=50
gcloud run services logs read cl-frontend --region=us-central1 --limit=50

# Check Cloud SQL
gcloud sql instances describe cl-postgres --format='value(state)'

# Check Memorystore
gcloud redis instances describe cl-redis --region=us-central1 --format='value(state)'

# Test the application
curl -I https://your-domain.com

CI/CD with Cloud Build

Automate deployments with Cloud Build:

cloudbuild.yaml
steps:
  # Build and push backend
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-backend:$COMMIT_SHA', './backend', '-f', './backend/Dockerfile']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-backend:$COMMIT_SHA']

  # Build and push worker
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-worker:$COMMIT_SHA', './backend', '-f', './backend/Dockerfile.worker']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-worker:$COMMIT_SHA']

  # Build and push frontend
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-frontend:$COMMIT_SHA', './frontend', '-f', './frontend/Dockerfile']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-frontend:$COMMIT_SHA']

  # Deploy
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'cl-backend', '--image', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-backend:$COMMIT_SHA', '--region', 'us-central1']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'cl-worker', '--image', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-worker:$COMMIT_SHA', '--region', 'us-central1']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'cl-frontend', '--image', 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-frontend:$COMMIT_SHA', '--region', 'us-central1']

images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-backend:$COMMIT_SHA'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-worker:$COMMIT_SHA'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/cl-images/cl-frontend:$COMMIT_SHA'
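
Run the pipeline manually from the repository root, or attach it to a repository trigger. Note that the Cloud Build service account needs the Cloud Run Admin and Service Account User roles to perform the deploy steps:

```shell
gcloud builds submit --config=cloudbuild.yaml .
```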

Pricing Disclaimer

Verify current pricing at cloud.google.com/pricing. GCP pricing changes frequently and may vary by region.