# Automations

Configure automated workflows that execute actions on triggers.

Automations are workflows that execute actions based on triggers. They connect triggers (when) with conditions (if) and actions (what).
## Concept

An automation combines:

- **Trigger** — when to run (schedule, webhook, event)
- **Conditions** — whether to run (filters, guards)
- **Actions** — what to execute (one or more, in sequence or as a DAG)
| Component | Examples |
|---|---|
| Trigger | `schedule: "0 6 * * *"`, `webhook: /orders`, `event: action.completed` |
| Conditions | `order.total > 100`, `status = "new"`, `region in ["US", "CA"]` |
| Actions | `dlt_extract`, `call_agent`, `web_search` |
## Trigger Types

### Schedule Triggers

Run automations on a cron schedule:
| Schedule | Cron Expression | Description |
|---|---|---|
| Every hour | `0 * * * *` | Top of every hour |
| Daily at 2am | `0 2 * * *` | Once per day |
| Every 6 hours | `0 */6 * * *` | 4 times per day |
| Weekdays at 9am | `0 9 * * 1-5` | Monday through Friday |
| Monthly | `0 0 1 * *` | First of the month |
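To make the five-field format concrete, here is a minimal, illustrative matcher for the expressions in the table above. It is a sketch, not the platform's scheduler; it handles `*`, `*/n`, ranges, lists, and literals, but omits cron features like named months.

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/6', '1-5', '0', '1,15') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt matches a 5-field cron expression: minute hour dom month dow."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # cron convention: 0 = Sunday

# "Weekdays at 9am": 2024-06-03 was a Monday
print(cron_matches("0 9 * * 1-5", datetime(2024, 6, 3, 9, 0)))  # True
```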
```json
{
  "trigger": {
    "type": "schedule",
    "schedule": "0 6 * * *",
    "timezone": "America/New_York"
  }
}
```

### Webhook Triggers
Trigger from external HTTP requests:
```json
{
  "trigger": {
    "type": "webhook",
    "path": "/hooks/shopify-orders",
    "secret": "whsec_..."
  }
}
```

This creates an endpoint at:

```
https://vai-dev.virtuousai.com/webhooks/{org_id}/hooks/shopify-orders
```

Webhook URLs are unique per automation. The secret is used to verify that incoming requests are authentic.
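The document does not specify the exact signature scheme, but webhook secrets like this are conventionally used as HMAC keys over the raw request body. As an assumed illustration (HMAC-SHA256 hex digest carried in a signature header), a receiver-side check might look like:

```python
import hashlib
import hmac

def verify_webhook(secret: str, payload: bytes, signature: str) -> bool:
    """Compare an HMAC-SHA256 hex digest of the raw body against the
    received signature, using a constant-time comparison."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulate a signed delivery (secret and payload are made up for the example)
body = b'{"order_id": 42}'
sig = hmac.new(b"whsec_example", body, hashlib.sha256).hexdigest()
print(verify_webhook("whsec_example", body, sig))  # True
```

Constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` can leak timing information about how many leading characters of the signature matched.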
### Event Triggers

React to events within VirtuousAI:
```json
{
  "trigger": {
    "type": "event",
    "eventType": "action.completed",
    "filters": {
      "actionKind": "dlt_extract"
    }
  }
}
```

Event types include:

- `action.completed`, `action.failed`
- `connection.data_available`, `connection.error`
- `automation.completed`, `automation.failed`
## Automation Lifecycle

| Status | Description | Triggers Fire? |
|---|---|---|
| `active` | Automation enabled and ready | Yes |
| `paused` | Temporarily disabled | No |
| `error` | Configuration issue detected | No |
## Conditions

Add conditions to filter when automations actually execute.

### Condition Operators

| Operator | Description | Example |
|---|---|---|
| `eq`, `ne` | Equals, not equals | `status eq "new"` |
| `gt`, `gte`, `lt`, `lte` | Numeric comparisons | `orderTotal gt 100` |
| `contains` | String contains | `email contains "@company.com"` |
| `startsWith`, `endsWith` | String prefix/suffix | `sku startsWith "PROD-"` |
| `in`, `notIn` | Array membership | `region in ["US", "CA", "MX"]` |
A single numeric condition:

```json
{
  "conditions": [
    {
      "field": "event.data.orderTotal",
      "operator": "gt",
      "value": 100
    }
  ]
}
```

Multiple conditions must all pass (AND logic):

```json
{
  "conditions": [
    {
      "field": "event.data.status",
      "operator": "eq",
      "value": "new"
    },
    {
      "field": "event.data.region",
      "operator": "in",
      "value": ["US", "CA"]
    }
  ]
}
```

Use dot notation for nested fields:

```json
{
  "conditions": [
    {
      "field": "event.data.customer.tier",
      "operator": "eq",
      "value": "enterprise"
    }
  ]
}
```
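A sketch of how these pieces fit together — dot-notation field lookup, the operator table, and AND semantics. This is an illustrative reimplementation, not the platform's evaluator:

```python
def get_field(data, path: str):
    """Resolve a dotted path like 'event.data.customer.tier'."""
    for part in path.split("."):
        if not isinstance(data, dict):
            return None
        data = data.get(part)
    return data

# One callable per operator from the table above
OPS = {
    "eq": lambda a, b: a == b,
    "ne": lambda a, b: a != b,
    "gt": lambda a, b: a > b,
    "gte": lambda a, b: a >= b,
    "lt": lambda a, b: a < b,
    "lte": lambda a, b: a <= b,
    "contains": lambda a, b: b in a,
    "startsWith": lambda a, b: a.startswith(b),
    "endsWith": lambda a, b: a.endswith(b),
    "in": lambda a, b: a in b,
    "notIn": lambda a, b: a not in b,
}

def passes(conditions: list, payload: dict) -> bool:
    """AND logic: every condition must hold for the automation to run."""
    return all(
        OPS[c["operator"]](get_field(payload, c["field"]), c["value"])
        for c in conditions
    )

event = {"event": {"data": {"status": "new", "region": "CA", "orderTotal": 250}}}
conds = [
    {"field": "event.data.status", "operator": "eq", "value": "new"},
    {"field": "event.data.region", "operator": "in", "value": ["US", "CA"]},
    {"field": "event.data.orderTotal", "operator": "gt", "value": 100},
]
print(passes(conds, event))  # True
```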
## Automation Runs

Each trigger creates an automation run that tracks execution:

| Field | Description |
|---|---|
| `runId` | Unique identifier for this run |
| `automationId` | Parent automation |
| `triggeredBy` | What triggered it (`schedule`, `webhook`, `event`, `manual`) |
| `startedAt` | When execution began |
| `completedAt` | When execution finished |
| `status` | `running`, `completed`, `failed`, `cancelled` |
| `actionRuns` | List of action executions with results |
## Multi-Action Workflows

Automations can execute multiple actions.

### Sequential Execution

Actions execute in order. If any action fails, the workflow stops:

```json
{
  "actions": [
    { "kind": "dlt_extract", "definition": {...} },
    { "kind": "transform", "definition": {...} },
    { "kind": "notify", "definition": {...} }
  ]
}
```

### DAG Execution
For complex workflows, define dependencies between steps:

```json
{
  "actions": [
    { "key": "extract", "kind": "dlt_extract", "definition": {...} },
    { "key": "transform", "kind": "duckdb", "dependsOn": ["extract"], "definition": {...} },
    { "key": "load", "kind": "export", "dependsOn": ["transform"], "definition": {...} },
    { "key": "notify", "kind": "webhook", "dependsOn": ["load"], "definition": {...} }
  ]
}
```

### Failure Behavior

By default, if any action fails, the automation stops and marks the run as failed. Subsequent actions are skipped.
## Step Types

Each step in an automation executes a specific action kind. The available step types are:

| Kind | Execution | Description |
|---|---|---|
| `dlt_extract` | Async | Extract data from sources (Shopify, Klaviyo, etc.) to the bronze layer |
| `duckdb_transform` | Async | Transform data using DuckDB SQL queries |
| `http_request` | Sync | Make HTTP requests to external APIs |
| `web_search` | Sync | Search the web for information |
| `fetch_page` | Sync | Extract readable content from a URL |
| `call_agent` | Async | Delegate work to an AI agent |
| `create_flashboard` | Async | Generate a flashboard dashboard |
| `approval_gate` | Sync | Pause the workflow and wait for human approval |
| `conditional_branch` | Sync | Evaluate conditions and route execution flow |
| `call_automation` | Async | Invoke another automation as a sub-workflow |
## Step Configuration

Each step in a DAG is configured with the `StepConfig` structure:

| Field | Type | Required | Description |
|---|---|---|---|
| `key` | string | Yes | Unique identifier. Must start with a lowercase letter and contain only `[a-z0-9_]` |
| `name` | string | No | Display name for the step |
| `action` | object | Yes | Either an inline definition or an action reference |
| `depends_on` | array | No | List of step dependencies |
| `condition` | object | No | Conditional execution rule |
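The `key` constraint above translates directly into a regular expression. A quick validation sketch (the exact server-side check is not shown in this document, so treat this as an interpretation of the stated rule):

```python
import re

# Stated rule: starts with a lowercase letter, then only [a-z0-9_]
STEP_KEY = re.compile(r"^[a-z][a-z0-9_]*$")

def valid_step_key(key: str) -> bool:
    """Validate a StepConfig key against the documented naming rule."""
    return bool(STEP_KEY.match(key))

print(valid_step_key("extract_data"))  # True
print(valid_step_key("2nd_step"))      # False: starts with a digit
print(valid_step_key("Extract"))       # False: uppercase not allowed
```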
### Action Configuration

Steps support two modes for defining their action.

Inline mode defines the action directly in the step:

```json
{
  "key": "extract_data",
  "action": {
    "kind": "dlt_extract",
    "definition": {
      "kind": "dlt_extract",
      "connection": { "slug": "shopify" },
      "source": "shopify",
      "resources": ["orders"]
    }
  }
}
```

Reference mode points at a pre-configured saved action:

```json
{
  "key": "extract_data",
  "action": {
    "action_id": "act_abc123def456"
  }
}
```

Reference mode reuses an existing action's configuration, making it easy to share common actions across automations.
### Dependencies with Port Routing

Dependencies can include port specifications for multi-output actions (such as `approval_gate` and `conditional_branch`):

```json
{
  "key": "process_approved",
  "action": { "kind": "duckdb_transform", "definition": {...} },
  "depends_on": [
    { "step": "approval_step", "port": "approved" }
  ]
}
```

| Dependency Format | Description |
|---|---|
| `"step_key"` | Simple dependency on step completion |
| `{ "step": "key" }` | Explicit object format |
| `{ "step": "key", "port": "approved" }` | Port-based routing for multi-output actions |

When a step has multiple incoming edges from the same multi-output source, any matching port allows execution (OR logic). This enables flexible routing patterns.
## Reference Resolution

Steps can dynamically reference values from automation inputs and previous step results using the `$/` syntax.

### Syntax

| Pattern | Resolves To |
|---|---|
| `$/inputs/{param}` | Automation input parameter |
| `$/inputs/{param}/{nested}` | Nested field in an input |
| `$/steps/{key}/result` | Full result of a completed step |
| `$/steps/{key}/result/{field}` | Specific field from a step result |
| `$/steps/{key}/status` | Status of a step (`completed`, `failed`, etc.) |
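A minimal resolver for this path grammar might look like the following. It is a sketch of the documented patterns, not the platform's resolver; the shape of the per-step record (`result`, `status`) is taken from the table above:

```python
def resolve_ref(ref: str, inputs: dict, steps: dict):
    """Resolve a '$/' reference against automation inputs and step records.

    `steps` maps step key -> {"result": {...}, "status": "..."}.
    Non-reference values are returned unchanged.
    """
    if not isinstance(ref, str) or not ref.startswith("$/"):
        return ref  # plain static value
    parts = ref[2:].split("/")
    if parts[0] == "inputs":
        node, rest = inputs, parts[1:]
    elif parts[0] == "steps":
        node, rest = steps[parts[1]], parts[2:]
    else:
        raise ValueError(f"unknown reference root: {parts[0]}")
    for part in rest:  # walk the remaining path segments
        node = node[part]
    return node

steps = {"extract": {"status": "completed",
                     "result": {"per_resource_rows": {"orders": 1200}}}}
inputs = {"sync_start_date": "2024-01-01"}

print(resolve_ref("$/steps/extract/result/per_resource_rows/orders", inputs, steps))  # 1200
print(resolve_ref("$/inputs/sync_start_date", inputs, steps))  # 2024-01-01
print(resolve_ref("$/steps/extract/status", inputs, steps))    # completed
```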
### Examples

Reference an input parameter passed when triggering the automation:

```json
{
  "key": "extract",
  "action": {
    "kind": "dlt_extract",
    "definition": {
      "kind": "dlt_extract",
      "source": "shopify",
      "start_date": "$/inputs/sync_start_date"
    }
  }
}
```

Reference the result of a previous step:
```json
{
  "key": "transform",
  "action": {
    "kind": "duckdb_transform",
    "definition": {
      "kind": "duckdb_transform",
      "sql": "SELECT * FROM read_parquet('$/steps/extract/result/destination_path')"
    }
  },
  "depends_on": ["extract"]
}
```

Access deeply nested values:
```json
{
  "key": "notify",
  "action": {
    "kind": "http_request",
    "definition": {
      "kind": "http_request",
      "method": "POST",
      "body": {
        "rows_extracted": "$/steps/extract/result/per_resource_rows/orders",
        "customer_email": "$/inputs/notification/email"
      }
    }
  },
  "depends_on": ["extract"]
}
```

### Object Reference Format
References can also be wrapped in an object for clarity:

```json
{
  "connection": {
    "kind": "reference",
    "ref": "$/steps/setup/result/connection_id"
  }
}
```

Static values can be explicitly marked:

```json
{
  "resources": {
    "kind": "static",
    "values": ["orders", "products"]
  }
}
```

## DAG Orchestration
The DAG orchestrator manages parallel execution and dependency resolution.

### Parallel Execution

The `max_parallel` setting limits concurrent step execution (default: 3):

```json
{
  "name": "Data Pipeline",
  "max_parallel": 5,
  "steps": [...]
}
```

| Setting | Effect |
|---|---|
| `max_parallel: 1` | Sequential execution, one step at a time |
| `max_parallel: 3` | Up to 3 steps run concurrently (default) |
| `max_parallel: 10` | High parallelism for independent steps |
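The scheduling loop the orchestrator performs can be approximated as: find steps whose dependencies are all complete, then dispatch as many as the `max_parallel` budget allows. A simplified sketch (ignoring ports and failure handling, which the real orchestrator also covers):

```python
def ready_steps(steps, completed, running, max_parallel=3):
    """Return step keys eligible to dispatch: all dependencies completed,
    not already running or done, capped by the remaining parallelism budget."""
    budget = max_parallel - len(running)
    ready = [
        s["key"] for s in steps
        if s["key"] not in completed
        and s["key"] not in running
        and all(dep in completed for dep in s.get("depends_on", []))
    ]
    return ready[:max(budget, 0)]

# The linear pipeline from the DAG Execution example
dag = [
    {"key": "extract"},
    {"key": "transform", "depends_on": ["extract"]},
    {"key": "load", "depends_on": ["transform"]},
    {"key": "notify", "depends_on": ["load"]},
]

print(ready_steps(dag, completed=set(), running=set()))        # ['extract']
print(ready_steps(dag, completed={"extract"}, running=set()))  # ['transform']
```

With a linear chain like this, `max_parallel` never actually bites; the budget only matters once a step fans out into several independent successors.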
### Step Execution Flow
### Port-Based Routing

Multi-output actions (`approval_gate`, `conditional_branch`) use ports to route execution:

```json
{
  "steps": [
    { "key": "check_data", "action": { "kind": "conditional_branch", ... } },
    {
      "key": "process_data",
      "depends_on": [{ "step": "check_data", "port": "true" }]
    },
    {
      "key": "handle_empty",
      "depends_on": [{ "step": "check_data", "port": "false" }]
    }
  ]
}
```

Only one downstream path executes, based on the action's output port.
### Idempotency

The orchestrator ensures steps are not dispatched twice:

- Each step has a concurrency key derived from the run ID and step key
- Database constraints prevent duplicate dispatches
- If two workers race to dispatch the same step, only one succeeds
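The guarantee can be modeled in miniature: an in-memory set guarded by a lock stands in for the database's unique constraint on the `(run_id, step_key)` concurrency key. This is purely illustrative; the real system relies on the database, not process memory:

```python
import threading

class Dispatcher:
    """Toy model of idempotent dispatch: first claim on a (run_id, step_key)
    pair wins, later attempts are rejected."""

    def __init__(self):
        self._lock = threading.Lock()
        self._dispatched = set()

    def try_dispatch(self, run_id: str, step_key: str) -> bool:
        key = (run_id, step_key)
        with self._lock:  # stands in for the DB's atomicity
            if key in self._dispatched:
                return False  # another worker already claimed this step
            self._dispatched.add(key)
            return True

d = Dispatcher()
print(d.try_dispatch("run_1", "extract"))  # True: first worker wins
print(d.try_dispatch("run_1", "extract"))  # False: duplicate suppressed
```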
## Step Conditions

Steps can execute conditionally based on the status of their dependencies.

| Condition | Description | Use Case |
|---|---|---|
| `on_success` | Only if ALL dependencies completed successfully | Normal flow |
| `on_error` | Only if ANY dependency failed | Error handling |
| `always` | Regardless of dependency status | Cleanup, notifications |
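The ALL/ANY semantics in this table reduce to a small decision function. A sketch of the rule as documented (not the platform's source):

```python
def should_run(condition_type: str, dep_statuses: list) -> bool:
    """Decide whether a step runs given its dependencies' final statuses."""
    if condition_type == "always":
        return True  # cleanup/notification steps run no matter what
    if condition_type == "on_error":
        return any(s == "failed" for s in dep_statuses)  # ANY failure triggers
    # default / on_success: ALL dependencies must have completed
    return all(s == "completed" for s in dep_statuses)

print(should_run("on_success", ["completed", "completed"]))  # True
print(should_run("on_error", ["completed", "failed"]))       # True
print(should_run("always", ["failed"]))                      # True
print(should_run("on_success", ["failed"]))                  # False
```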
### Examples

Run only when a previous step fails:

```json
{
  "key": "handle_extraction_error",
  "action": {
    "kind": "http_request",
    "definition": {
      "kind": "http_request",
      "method": "POST",
      "url": "https://alerts.example.com/webhook",
      "body": { "error": "Extraction failed" }
    }
  },
  "depends_on": ["extract_data"],
  "condition": { "type": "on_error" }
}
```

Always run, regardless of the previous step's status:

```json
{
  "key": "cleanup_temp_files",
  "action": { "kind": "http_request", "definition": {...} },
  "depends_on": ["transform_data"],
  "condition": { "type": "always" }
}
```

Notify only on successful completion:

```json
{
  "key": "notify_success",
  "action": { "kind": "http_request", "definition": {...} },
  "depends_on": ["load_to_warehouse"],
  "condition": { "type": "on_success" }
}
```

Without a condition specified, steps run only when ALL dependencies complete successfully (implicit `on_success`).
Creating Automations
{
"name": "Daily Order Sync",
"description": "Sync Shopify orders every day at 2am ET",
"trigger": {
"type": "schedule",
"schedule": "0 2 * * *",
"timezone": "America/New_York"
},
"actions": [
{ "kind": "dlt_extract", "definition": { "source": "shopify", "resources": ["orders"] } }
],
"enabled": true
}{
"name": "Process New Data",
"description": "Transform data when extraction completes",
"trigger": {
"type": "event",
"eventType": "action.completed",
"filters": { "kind": "dlt_extract" }
},
"conditions": [
{ "field": "event.data.recordCount", "operator": "gt", "value": 0 }
],
"actions": [
{ "kind": "transform", "definition": {...} }
],
"enabled": true
}{
"name": "External Trigger",
"description": "Run sync when triggered by external system",
"trigger": {
"type": "webhook",
"path": "/external/sync",
"secret": "whsec_your_secret_here"
},
"actions": [
{ "kind": "dlt_extract", "definition": {...} }
],
"enabled": true
}Crystallized Automations
Automations can be created manually or crystallized from successful agent runs and chat threads. Crystallization extracts the execution DAG from an AI-driven workflow and converts it into a deterministic automation.
| Source Type | Origin | How Created |
|---|---|---|
| `MANUAL` | Created directly via API or UI | Standard creation flow |
| `CRYSTALLIZED` | Extracted from an agent run or chat thread | `POST /agents/runs/{id}/crystallize` or `POST /chat/threads/{id}/crystallize` |

Crystallized automations are fully editable: you can modify steps, add triggers, adjust conditions, or extend the workflow after creation. They behave identically to manually created automations.
See Crystallization for a detailed explanation of how ad-hoc AI work becomes repeatable automation.
## Best Practices
- Start simple — Begin with single-action automations, add complexity gradually
- Use conditions wisely — Filter early to avoid unnecessary action executions
- Monitor runs — Check automation run history regularly, set up failure alerts
- Test with manual triggers — Use the trigger endpoint to test before enabling schedules
- Use descriptive names — Include frequency and purpose (e.g., "Daily 2am - Sync Orders")
- Consider time zones — Always specify timezone for scheduled automations
## OpenAPI Reference
For detailed endpoint schemas, request/response formats, and authentication: