The Strongly.AI Python SDK
The Strongly Python SDK gives you full programmatic access to the Strongly.AI platform from any Python application. Manage apps, run workflows, call AI models, track experiments, and control infrastructure - all from a single, typed client library with both sync and async support.
One client library. Twenty-seven resource domains. Every capability the Strongly.AI platform offers - available from your Python code.
- 27 API resource domains - apps, workflows, AI gateway, addons, governance, FinOps, and more
- Typed Pydantic models for all request and response objects
- Automatic pagination, retry with exponential backoff, and error handling
- Full async support with AsyncStrongly
- MLOps helpers for experiment tracking and autologging
- Idempotency keys, event hooks, and structured logging
Installation
Install from PyPI. Requires Python 3.9 or later:
pip install strongly
Authentication
Create an API key in the Strongly UI under Profile > Security > REST API Keys, then pass it to the client:
from strongly import Strongly
# Pass the key directly
client = Strongly(api_key="sk-prod-...")
# Or set an environment variable (recommended)
# export STRONGLY_API_KEY=sk-prod-...
client = Strongly()
The SDK also reads credentials from ~/.strongly/config and auto-detects them inside Strongly workspaces. You never have to hardcode secrets in production.
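The lookup order described above (explicit argument, then environment variable, then config file) can be sketched in plain Python. This is an illustration of the documented behavior, not the SDK's actual internals - the `resolve_api_key` helper and the config-file format are assumptions:

```python
import os
from pathlib import Path

def resolve_api_key(explicit_key=None, config_path="~/.strongly/config"):
    """Illustrative credential lookup: explicit arg > env var > config file."""
    if explicit_key:                      # 1. key passed directly to Strongly(...)
        return explicit_key
    env_key = os.environ.get("STRONGLY_API_KEY")
    if env_key:                           # 2. STRONGLY_API_KEY environment variable
        return env_key
    config = Path(config_path).expanduser()
    if config.exists():                   # 3. ~/.strongly/config fallback
        for line in config.read_text().splitlines():
            if line.startswith("api_key"):
                return line.split("=", 1)[1].strip()
    return None                           # no credentials found

# An explicit key always wins over the environment
os.environ["STRONGLY_API_KEY"] = "sk-from-env"
print(resolve_api_key("sk-explicit"))  # sk-explicit
print(resolve_api_key())               # sk-from-env
```

Keeping the key out of source code (options 2 and 3) is what makes the "never hardcode secrets" guarantee work in practice.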
Quick Start
Here is a complete example that deploys an app, runs a workflow, and calls an AI model - all in a few lines of Python:
from strongly import Strongly
client = Strongly()
# Deploy an app
app = client.apps.create({"name": "my-service", "runtime": "python3.11"})
client.apps.deploy(app.id)
# Run a workflow
result = client.workflows.execute("wf-abc123")
# Chat with an AI model
response = client.ai.inference.chat_completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content)
Every response is a typed Pydantic model. You get full IDE autocompletion, inline docs, and static type checking - no guessing at dictionary keys or response shapes.
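To see what typed responses buy you, compare attribute access on a model object with raw dict access. This sketch uses a stdlib dataclass as a stand-in for the SDK's Pydantic models, and the `ChatResponse` fields are illustrative, not the SDK's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ChatResponse:
    """Stand-in for a typed SDK response model (illustrative fields)."""
    content: str
    model: str
    total_tokens: int

resp = ChatResponse(content="Hello!", model="gpt-4o-mini", total_tokens=12)

# Attribute access: your editor autocompletes .content, and a typo like
# resp.contents raises AttributeError instead of failing silently.
print(resp.content)        # Hello!

# With a raw dict you would be guessing at keys:
raw = {"content": "Hello!", "model": "gpt-4o-mini", "total_tokens": 12}
print(raw.get("contnet"))  # None - a silent typo bug
```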
Core Resources
The SDK organizes the platform into intuitive resource namespaces. Each namespace maps directly to a set of platform capabilities, giving you a clear mental model of what you can build.
Apps
Deploy, manage, and monitor containerized applications. Use client.apps to create, deploy, scale, and check status.
Workflows
Build and execute data pipelines and automation workflows. Track execution progress node by node.
AI Inference
Run chat completions, embeddings, and text generation through the Strongly AI gateway with any model.
Addons
Provision PostgreSQL, Redis, and other managed services. Retrieve connection credentials instantly.
Governance
Enforce data retention policies, compliance rules, and attestation workflows across your infrastructure.
FinOps
Set budgets, track costs, and schedule resource groups. Keep your AI spend under control.
Apps
Deploy, manage, and monitor containerized applications:
# List running apps
for app in client.apps.list(status="running"):
    print(f"{app.name} - {app.status}")
# Deploy and check status
client.apps.deploy(app_id)
status = client.apps.status(app_id)
print(f"Ready: {status.ready}")
Workflows
Build and execute data pipelines and automation workflows:
# Create and execute a workflow
wf = client.workflows.create({
    "name": "Daily ETL",
    "description": "Extract, transform, load customer data",
})
result = client.workflows.execute(wf.id)
# Track execution progress
progress = client.executions.progress(result["executionId"])
print(f"{progress.completed_nodes}/{progress.total_nodes} nodes done")
AI Inference
Run chat completions, embeddings, and text generation through the Strongly AI gateway:
# Chat completion
response = client.ai.inference.chat_completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain transformers in 3 sentences."},
    ],
    temperature=0.7,
    max_tokens=500,
)
print(response.content)
# Embeddings
result = client.ai.inference.embedding(
    model="text-embedding-ada-002",
    input=["Hello world", "Machine learning"],
)
print(f"{len(result.embeddings)} embeddings returned")
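Embedding vectors are typically consumed by comparing them - most commonly with cosine similarity. Here is a stdlib sketch; the three-dimensional vectors are tiny made-up examples, not real model output (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three texts
hello = [0.9, 0.1, 0.2]
greeting = [0.85, 0.15, 0.25]
invoice = [0.05, 0.9, 0.3]

print(cosine_similarity(hello, greeting))  # near 1.0: similar texts
print(cosine_similarity(hello, invoice))   # much lower: unrelated texts
```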
Addons (Managed Infrastructure)
Provision PostgreSQL, Redis, and other managed services:
addon = client.addons.create({
    "label": "my-postgres",
    "type": "postgresql",
    "cpu": "500m",
    "memory": "1Gi",
    "disk": "10Gi",
})
client.addons.start(addon.id)
# Get connection credentials
creds = client.addons.credentials(addon.id)
print(f"Host: {creds.host}:{creds.port}")
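Credentials like these are usually assembled into a connection string. The sketch below assumes `user`, `password`, and `database` fields beyond the `host` and `port` shown above - those extra fields are illustrative, so check the actual credentials object for its real attributes:

```python
from urllib.parse import quote

def postgres_dsn(host, port, user, password, database):
    """Build a libpq-style connection URL, percent-encoding the password."""
    return f"postgresql://{user}:{quote(password, safe='')}@{host}:{port}/{database}"

dsn = postgres_dsn("db.internal", 5432, "app", "p@ss/word", "appdb")
print(dsn)  # postgresql://app:p%40ss%2Fword@db.internal:5432/appdb
```

Percent-encoding matters: characters like `@` and `/` in a password would otherwise break URL parsing.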
Governance & FinOps
Enforce policies and track costs across your AI infrastructure:
# Create a governance policy
client.governance.policies.create({
    "name": "data-retention-90d",
    "description": "All data must be retained for 90 days",
    "rules": [{"type": "retention", "days": 90}],
})

# Set a budget
client.finops.budgets.create({
    "name": "Q1 ML Compute",
    "amount": 5000,
    "period": "monthly",
})
Pagination & Async Support
Pagination
All list() methods return auto-paginating iterators. You never have to deal with page tokens, cursors, or manual offset management:
# Iterate through all results automatically
for app in client.apps.list():
    print(app.name)
# Get all items as a Python list
all_apps = client.apps.list().to_list()
# Get just the first match
first = client.apps.list(status="running").first()
Async Support
Every operation has a full async counterpart. The AsyncStrongly client mirrors the synchronous API exactly, so switching between sync and async is effortless:
import asyncio
from strongly import AsyncStrongly
async def main():
    async with AsyncStrongly() as client:
        async for workflow in client.workflows.list(status="active"):
            print(workflow.name)

        response = await client.ai.inference.chat_completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello!"}],
        )
        print(response.content)

asyncio.run(main())
The async client uses the same interface as the sync client - if you know one, you know both. Zero learning curve when scaling to concurrent workloads.
Error Handling
The SDK raises typed exceptions for every error condition. No more guessing at HTTP status codes or parsing error JSON - just catch the specific exception you care about:
from strongly import Strongly, NotFoundError, RateLimitError, ValidationError
client = Strongly()
try:
    app = client.apps.retrieve("nonexistent")
except NotFoundError as e:
    print(f"Not found: {e.message}")
except RateLimitError as e:
    print(f"Rate limited - retrying in {e.retry_after}s")
except ValidationError as e:
    print(f"Invalid input: {e.message}")
    for detail in e.details:
        print(f"  {detail}")
For transient errors (network timeouts, 429s, 5xx responses), the SDK automatically retries with exponential backoff. You only need to handle exceptions when you want to customize the behavior.
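The retry strategy described above can be illustrated with a standalone sketch of exponential backoff with jitter. The base delay, cap, and jitter style here are illustrative defaults, not the SDK's actual configuration:

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0, seed=None):
    """Exponential backoff with full jitter: the ceiling doubles each attempt
    (capped), and a random delay up to that ceiling avoids thundering herds."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))
    return delays

# Example schedule (seeded for reproducibility); a real client sleeps
# for each delay before re-issuing the failed request.
for attempt, d in enumerate(backoff_delays(seed=42)):
    print(f"retry {attempt + 1}: wait {d:.2f}s")
```

The jitter is the important part: if every client retried after exactly the same delay, a transient outage would produce synchronized waves of retries.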
MLOps Experiment Tracking
Track ML experiments with parameters, metrics, and artifacts using the built-in convenience helpers. The API is designed to feel familiar if you have used MLflow or Weights & Biases:
import strongly
strongly.set_experiment("churn-model")
with strongly.start_run(run_name="rf-baseline"):
    strongly.log_params({"n_estimators": 100, "max_depth": 10})
    strongly.log_metrics({"accuracy": 0.94, "f1": 0.91})
    strongly.log_model(model, "classifier")  # model: your trained estimator
Parameter Tracking
Log hyperparameters, config dicts, and metadata with a single call. Compare across runs effortlessly.
Metrics & Artifacts
Track accuracy, loss, custom metrics, and artifacts like trained models, plots, and data snapshots.
Autologging
Automatic capture of framework-specific params and metrics for scikit-learn, PyTorch, and more.
Model Registry
Version, stage, and deploy models directly from experiment runs with full lineage tracking.
Idempotency & Hooks
For safe retries, pass an idempotency key on mutating requests. For observability, attach request and response hooks to monitor every API call:
# Idempotent create - safe to retry
app = client.apps.create(
    {"name": "my-service"},
    idempotency_key="create-my-service-v1",
)

# Event hooks for logging/monitoring
def on_request(method, url, **kwargs):
    print(f"-> {method} {url}")

def on_response(method, url, status_code, **kwargs):
    print(f"<- {status_code} {method} {url}")

client = Strongly(on_request=on_request, on_response=on_response)
Network failures happen. With an idempotency key, you can safely retry any create or update call without risking duplicate resources. The platform returns the original result if the key has already been seen.
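The semantics described above can be sketched as a server-side cache keyed by the idempotency key. This toy server is an illustration of the behavior, not the platform's implementation:

```python
import itertools

class ToyServer:
    """Replays the first result seen for each idempotency key."""
    def __init__(self):
        self._seen = {}
        self._ids = itertools.count(1)

    def create_app(self, payload, idempotency_key):
        if idempotency_key in self._seen:      # retried request: replay original
            return self._seen[idempotency_key]
        resource = {"id": f"app-{next(self._ids)}", **payload}
        self._seen[idempotency_key] = resource
        return resource

server = ToyServer()
first = server.create_app({"name": "my-service"}, "create-my-service-v1")
retry = server.create_app({"name": "my-service"}, "create-my-service-v1")
print(first["id"], retry["id"])  # same id: the retry created no duplicate
```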
All Resources
The SDK covers the full Strongly.AI platform API across 27 resource domains. Every method is typed, documented, and follows the same consistent patterns. The most commonly used namespaces include:
- client.apps - Deploy, scale, monitor
- client.workflows - Workflow pipelines
- client.executions - Execution history
- client.addons - Managed services
- client.datasources - Data connections
- client.ai.models - Model catalog
- client.ai.inference - Completions, embeddings
- client.ai.provider_keys - Provider key mgmt
- client.ai.analytics - AI usage analytics
- client.projects - Project management
- client.workspaces - Dev environments
- client.volumes - Persistent storage
- client.fine_tuning - Fine-tune LLMs
- client.experiments - ML tracking
- client.automl - Automated ML
- client.model_registry - Model versioning
- client.governance - Policies, attestations
- client.finops - Costs, budgets
- client.users - User management
- client.organizations - Org management
Ready to Get Started?
Install the SDK with pip install strongly and start building with the full power of the Strongly.AI platform from Python.