Integration Guide

This guide covers common integration patterns for connecting external systems to Contract Lucidity's API.

General Integration Pattern

All integrations follow the same high-level flow:

  1. Authenticate — Exchange client credentials for a JWT access token.
  2. Upload — POST the document with multipart/form-data.
  3. Poll — Check pipeline_status on the document until it reaches complete or failed.
  4. Retrieve — Fetch the report, obligations, or other analysis data.
  5. Act — Push results to downstream systems (SharePoint, Teams, email, database, etc.).

Polling for Pipeline Status

After uploading a document, poll GET /api/v1/documents/{id} and check the pipeline_status field. The pipeline progresses through these stages in order:

queued → extracting → classifying → playbook → analyzing → embedding → storing → complete

If processing fails, the status will be failed and the response will include error_message with details.

Recommended polling interval: every 5 seconds. A typical document completes in 30-120 seconds depending on size and AI provider response times.

# Poll until complete or failed
while true; do
  STATUS=$(curl -s https://<your-instance>/api/v1/documents/<doc-id> \
    -H "Authorization: Bearer <access_token>" | jq -r '.pipeline_status')

  echo "Status: $STATUS"

  if [ "$STATUS" = "complete" ] || [ "$STATUS" = "failed" ]; then
    break
  fi

  sleep 5
done
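The shell loop above polls forever; in production it is safer to bound the wait. A minimal Python sketch of the same pattern with a deadline, where the hypothetical `fetch_status` callable stands in for the GET request shown above:

```python
import time

def poll_until_done(fetch_status, interval=5, timeout=600):
    """Poll fetch_status() until it returns 'complete' or 'failed'.

    Raises TimeoutError if neither terminal status arrives within
    `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("complete", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("document did not finish processing in time")
```

In a real integration, `fetch_status` would wrap the authenticated GET on `/api/v1/documents/{id}` and return the `pipeline_status` field.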

Webhook Support

Webhook notifications (push-based instead of polling) are planned but not yet available. For now, use polling as described above.


Power Platform Custom Connector

Microsoft Power Automate and Power Apps can integrate with Contract Lucidity using a custom connector.

Approach: HTTP Actions with Token Management

The simplest approach uses standard HTTP actions rather than a formal custom connector, since Power Automate handles the token exchange natively.

Step 1: Store Credentials

  1. In Power Automate, create a flow with a trigger (e.g., "When a file is created in SharePoint").
  2. Add an HTTP action to exchange credentials for a token:
  Setting    Value
  Method     POST
  URI        https://<your-instance>/api/v1/auth/token
  Headers    Content-Type: application/json
  Body       {"client_id": "<your-client-id>", "client_secret": "<your-client-secret>"}

  3. Parse the response JSON to extract access_token.
Tip: Store the client ID and secret in Azure Key Vault and reference them via the Key Vault connector rather than hardcoding them in the flow.

Step 2: Upload Document

Add another HTTP action:

  Setting    Value
  Method     POST
  URI        https://<your-instance>/api/v1/documents
  Headers    Authorization: Bearer <access_token from Step 1>
  Body       Form-data with file (from trigger) and project_id

Step 3: Poll for Completion

Add a Do Until loop:

  1. HTTP GET https://<your-instance>/api/v1/documents/<document-id>
  2. Check if pipeline_status equals complete or failed.
  3. Add a Delay of 10 seconds between iterations.
  4. Set a timeout (e.g., 10 minutes) to avoid infinite loops.

Step 4: Retrieve and Act

Once complete:

  • HTTP GET /api/v1/documents/{id}/report to retrieve the analysis.
  • Use the SharePoint connector to upload the report or update a list item.
  • Use the Teams connector to post a notification.
  • Use the Outlook connector to email stakeholders.

Example Flow Summary

SharePoint trigger → authenticate → upload → poll until complete → retrieve report → notify via SharePoint, Teams, or Outlook.

Alternative: Formal Custom Connector

If you prefer a reusable custom connector:

  1. Export the OpenAPI spec from https://<your-instance>/api/docs (Swagger UI) or https://<your-instance>/openapi.json.
  2. In Power Platform, go to Data > Custom connectors > New > Import an OpenAPI file.
  3. Configure authentication as API Key with the header name Authorization and value Bearer <token>.
  4. Test the connector and publish.
Note: The formal custom connector approach requires manually managing token refresh, since Power Platform's API Key auth type does not handle token expiry automatically. The HTTP action approach described above gives you full control over the token lifecycle.


Postman

Postman is useful for exploring the API during development and testing.

Import the OpenAPI Spec

  1. Open Postman and click Import.
  2. Enter the URL: https://<your-instance>/openapi.json
  3. Postman will create a collection with all available endpoints.

Set Up Environment Variables

Create a Postman environment with these variables:

  Variable        Initial Value                      Description
  base_url        https://<your-instance>/api/v1     API base URL
  client_id       Your client ID                     API credential
  client_secret   Your client secret                 API credential
  access_token    (empty)                            Populated automatically

Auto-Fetch Token with Pre-Request Script

Add this pre-request script to the collection to automatically fetch a token before each request:

const tokenUrl = pm.environment.get("base_url") + "/auth/token";
const clientId = pm.environment.get("client_id");
const clientSecret = pm.environment.get("client_secret");

pm.sendRequest({
  url: tokenUrl,
  method: "POST",
  header: { "Content-Type": "application/json" },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      client_id: clientId,
      client_secret: clientSecret
    })
  }
}, function (err, res) {
  if (err) {
    console.error(err);
  } else {
    const token = res.json().access_token;
    pm.environment.set("access_token", token);
  }
});

Then set the collection's Authorization to Bearer Token with the value {{access_token}}.


Python

A minimal Python integration using the requests library:

import requests
import time

BASE_URL = "https://<your-instance>/api/v1"

# Authenticate
auth = requests.post(f"{BASE_URL}/auth/token", json={
    "client_id": "your-client-id",
    "client_secret": "your-client-secret"
})
token = auth.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Upload document
with open("contract.pdf", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/documents",
        headers=headers,
        files={"file": f},
        data={"project_id": "your-project-uuid"}
    )
doc_id = upload.json()["id"]

# Poll for completion
while True:
    doc = requests.get(f"{BASE_URL}/documents/{doc_id}", headers=headers).json()
    status = doc["pipeline_status"]
    print(f"Status: {status}")
    if status in ("complete", "failed"):
        break
    time.sleep(5)

# Retrieve report
if status == "complete":
    report = requests.get(
        f"{BASE_URL}/documents/{doc_id}/report",
        headers=headers
    ).json()
    print(f"Risk: {report['overall_risk']}")
    print(f"Summary: {report['executive_summary']}")
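The example above only handles the success path. When the pipeline ends in failed, the document response includes an error_message field (as noted in the polling section). A small hypothetical helper for surfacing that case might look like:

```python
def report_failure(doc):
    """Return a human-readable message for a failed document payload,
    or None if the document did not fail."""
    if doc.get("pipeline_status") == "failed":
        detail = doc.get("error_message", "no details provided")
        return f"Processing failed: {detail}"
    return None
```

In the polling loop above, you could call `report_failure(doc)` after breaking out and log or alert on the result.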

Best Practices

  • Token caching: Cache the access token and reuse it for its full 24-hour lifetime. Do not request a new token for every API call.
  • Error handling: Always check HTTP status codes. Retry on 500 with exponential backoff. Do not retry on 401; re-authenticate instead.
  • Timeouts: Set HTTP timeouts on all requests (30 seconds for most endpoints, 60 seconds for document uploads).
  • Bulk uploads: Upload documents sequentially, not in parallel. The worker queue handles concurrency internally.
  • Idempotency: Document uploads are not idempotent. Uploading the same file twice creates two separate documents. Track upload state in your integration to avoid duplicates.
  • Security: Never log or store access tokens in plain text. Use secret managers for client credentials.
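The token-caching recommendation can be sketched as a small helper. This is an illustrative wrapper, not part of the API: `fetch_token` is a hypothetical callable standing in for the POST /auth/token exchange, and the 24-hour lifetime comes from the recommendation above.

```python
import time

class TokenCache:
    """Caches an access token and refreshes it only when it nears expiry."""

    def __init__(self, fetch_token, lifetime=24 * 3600, margin=300):
        self._fetch = fetch_token   # callable returning a fresh token string
        self._lifetime = lifetime   # assumed token validity in seconds
        self._margin = margin       # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        """Return a valid token, fetching a new one only when needed."""
        if self._token is None or time.monotonic() >= self._expires_at - self._margin:
            self._token = self._fetch()
            self._expires_at = time.monotonic() + self._lifetime
        return self._token
```

Every API call then reads its Authorization header from `cache.get()`, and the credentials round-trip happens at most once per lifetime window.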