
Automations

Configure automated workflows that execute actions on triggers

Automations are workflows that execute actions based on triggers. They connect your triggers (when) with your actions (what) and conditions (if).

Concept

An automation combines:

  • Trigger — When to run (schedule, webhook, event)
  • Conditions — Whether to run (filters, guards)
  • Actions — What to execute (one or more, in sequence or DAG)

| Component | Examples |
| --- | --- |
| Trigger | schedule: "0 6 * * *", webhook: /orders, event: action.completed |
| Conditions | order.total > 100, status = "new", region in ["US", "CA"] |
| Actions | dlt_extract, call_agent, web_search |

Trigger Types

Schedule Triggers

Run automations on a cron schedule:

| Schedule | Cron Expression | Description |
| --- | --- | --- |
| Every hour | 0 * * * * | Top of every hour |
| Daily at 2am | 0 2 * * * | Once per day |
| Every 6 hours | 0 */6 * * * | 4 times per day |
| Weekdays at 9am | 0 9 * * 1-5 | Monday-Friday |
| Monthly | 0 0 1 * * | First of the month |

{
  "trigger": {
    "type": "schedule",
    "schedule": "0 6 * * *",
    "timezone": "America/New_York"
  }
}
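
Cron expressions like these can be checked with a small matcher. The sketch below is an illustration, not the platform's scheduler: it tests whether a datetime matches a 5-field expression, supporting the `*`, `*/n`, range, and literal forms from the table above. Timezone handling is omitted for brevity.

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Return True if dt matches the 5-field cron expression (minute hour dom month dow)."""
    def field_matches(field: str, value: int) -> bool:
        for part in field.split(","):
            if part == "*":
                return True
            if part.startswith("*/"):            # step values, e.g. */6
                if value % int(part[2:]) == 0:
                    return True
            elif "-" in part:                    # ranges, e.g. 1-5
                lo, hi = map(int, part.split("-"))
                if lo <= value <= hi:
                    return True
            elif int(part) == value:
                return True
        return False

    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # cron convention: 0 = Sunday
```

For example, `cron_matches("0 9 * * 1-5", dt)` is true only at 9:00 on a weekday.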

Webhook Triggers

Trigger from external HTTP requests:

{
  "trigger": {
    "type": "webhook",
    "path": "/hooks/shopify-orders",
    "secret": "whsec_..."
  }
}

This creates an endpoint at:

https://vai-dev.virtuousai.com/webhooks/{org_id}/hooks/shopify-orders

Webhook URLs are unique per automation. The secret is used to verify that incoming requests are authentic.
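
The docs don't specify the verification scheme; a common pattern for `whsec_`-style secrets is an HMAC-SHA256 signature of the raw request body, compared in constant time. A minimal sketch, in which the hex encoding and the idea that the signature arrives in a request header are assumptions:

```python
import hashlib
import hmac

def verify_webhook(secret: str, body: bytes, signature: str) -> bool:
    """Check a hex HMAC-SHA256 of the raw request body against the received signature."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)
```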

Event Triggers

React to events within VirtuousAI:

{
  "trigger": {
    "type": "event",
    "eventType": "action.completed",
    "filters": {
      "actionKind": "dlt_extract"
    }
  }
}

Event types include:

  • action.completed, action.failed
  • connection.data_available, connection.error
  • automation.completed, automation.failed

Automation Lifecycle

| Status | Description | Triggers Fire? |
| --- | --- | --- |
| active | Automation enabled and ready | Yes |
| paused | Temporarily disabled | No |
| error | Configuration issue detected | No |

Conditions

Add conditions to filter when automations actually execute:

Condition Operators

| Operator | Description | Example |
| --- | --- | --- |
| eq, ne | Equals, not equals | status eq "new" |
| gt, gte, lt, lte | Numeric comparisons | orderTotal gt 100 |
| contains | String contains | email contains "@company.com" |
| startsWith, endsWith | String prefix/suffix | sku startsWith "PROD-" |
| in, notIn | Array membership | region in ["US", "CA", "MX"] |

{
  "conditions": [
    {
      "field": "event.data.orderTotal",
      "operator": "gt",
      "value": 100
    }
  ]
}
{
  "conditions": [
    {
      "field": "event.data.status",
      "operator": "eq",
      "value": "new"
    },
    {
      "field": "event.data.region",
      "operator": "in",
      "value": ["US", "CA"]
    }
  ]
}

All conditions must pass (AND logic)

{
  "conditions": [
    {
      "field": "event.data.customer.tier",
      "operator": "eq",
      "value": "enterprise"
    }
  ]
}

Use dot notation for nested fields
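
Condition evaluation can be sketched as a small interpreter: resolve the dot-notation field path, apply the operator, and AND the results. This is an illustration of the documented semantics, not the platform's implementation:

```python
def get_field(data: dict, path: str):
    """Resolve a dot-notation path like 'event.data.customer.tier'."""
    for part in path.split("."):
        data = data[part]
    return data

OPERATORS = {
    "eq": lambda a, b: a == b,
    "ne": lambda a, b: a != b,
    "gt": lambda a, b: a > b,
    "gte": lambda a, b: a >= b,
    "lt": lambda a, b: a < b,
    "lte": lambda a, b: a <= b,
    "contains": lambda a, b: b in a,
    "startsWith": lambda a, b: a.startswith(b),
    "endsWith": lambda a, b: a.endswith(b),
    "in": lambda a, b: a in b,
    "notIn": lambda a, b: a not in b,
}

def conditions_pass(conditions: list, context: dict) -> bool:
    """All conditions must pass (AND logic)."""
    return all(
        OPERATORS[c["operator"]](get_field(context, c["field"]), c["value"])
        for c in conditions
    )
```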

Automation Runs

Each trigger creates an automation run that tracks execution:

| Field | Description |
| --- | --- |
| runId | Unique identifier for this run |
| automationId | Parent automation |
| triggeredBy | What triggered it (schedule, webhook, event, manual) |
| startedAt | When execution began |
| completedAt | When execution finished |
| status | running, completed, failed, cancelled |
| actionRuns | List of action executions with results |

Multi-Action Workflows

Automations can execute multiple actions:

Sequential Execution

Actions execute in order. If any action fails, the workflow stops:

{
  "actions": [
    { "kind": "dlt_extract", "definition": {...} },
    { "kind": "transform", "definition": {...} },
    { "kind": "notify", "definition": {...} }
  ]
}

DAG Execution

For complex workflows, define dependencies between steps:

{
  "actions": [
    { "key": "extract", "kind": "dlt_extract", "definition": {...} },
    { "key": "transform", "kind": "duckdb", "dependsOn": ["extract"], "definition": {...} },
    { "key": "load", "kind": "export", "dependsOn": ["transform"], "definition": {...} },
    { "key": "notify", "kind": "webhook", "dependsOn": ["load"], "definition": {...} }
  ]
}

Failure Behavior

By default, if any action fails, the automation stops and marks the run as failed. Subsequent actions are skipped.
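
The stop-on-failure behavior can be sketched as a simple loop. This is illustrative only; `execute` is a hypothetical stand-in for the real action runner:

```python
def run_sequential(actions, execute):
    """Run actions in order; stop at the first failure and skip the rest."""
    results = []
    for action in actions:
        try:
            results.append({"action": action["kind"], "status": "completed",
                            "result": execute(action)})
        except Exception as exc:
            results.append({"action": action["kind"], "status": "failed",
                            "error": str(exc)})
            break  # subsequent actions are skipped; the run is marked failed
    return results
```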

Step Types

Each step in an automation executes a specific action kind. Here are all available step types:

| Kind | Execution | Description |
| --- | --- | --- |
| dlt_extract | Async | Extract data from sources (Shopify, Klaviyo, etc.) to bronze layer |
| duckdb_transform | Async | Transform data using DuckDB SQL queries |
| http_request | Sync | Make HTTP requests to external APIs |
| web_search | Sync | Search the web for information |
| fetch_page | Sync | Extract readable content from a URL |
| call_agent | Async | Delegate work to an AI agent |
| create_flashboard | Async | Generate a flashboard dashboard |
| approval_gate | Sync | Pause workflow and wait for human approval |
| conditional_branch | Sync | Evaluate conditions and route execution flow |
| call_automation | Async | Invoke another automation as a sub-workflow |

Step Configuration

Each step in a DAG is configured with the StepConfig structure:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| key | string | Yes | Unique identifier. Must start with a lowercase letter, contain only [a-z0-9_] |
| name | string | No | Display name for the step |
| action | object | Yes | Either inline definition or action reference |
| depends_on | array | No | List of step dependencies |
| condition | object | No | Conditional execution rule |
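
The key constraint above maps directly to a regular expression. A minimal sketch of the documented rule:

```python
import re

# Documented rule: starts with a lowercase letter, then only [a-z0-9_]
STEP_KEY = re.compile(r"^[a-z][a-z0-9_]*$")

def is_valid_step_key(key: str) -> bool:
    return bool(STEP_KEY.match(key))
```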

Action Configuration

Steps support two modes for defining their action:

Inline definition — define the action directly in the step:

{
  "key": "extract_data",
  "action": {
    "kind": "dlt_extract",
    "definition": {
      "kind": "dlt_extract",
      "connection": { "slug": "shopify" },
      "source": "shopify",
      "resources": ["orders"]
    }
  }
}

Action reference — reference a pre-configured saved action:

{
  "key": "extract_data",
  "action": {
    "action_id": "act_abc123def456"
  }
}

This mode reuses an existing action's configuration, making it easier to share common actions across automations.

Dependencies with Port Routing

Dependencies can include port specifications for multi-output actions (like approval_gate and conditional_branch):

{
  "key": "process_approved",
  "action": { "kind": "duckdb_transform", "definition": {...} },
  "depends_on": [
    { "step": "approval_step", "port": "approved" }
  ]
}

| Dependency Format | Description |
| --- | --- |
| "step_key" | Simple dependency on step completion |
| { "step": "key" } | Explicit object format |
| { "step": "key", "port": "approved" } | Port-based routing for multi-output actions |

When a step has multiple incoming edges from the same multi-output source, ANY matching port allows execution (OR logic). This enables flexible routing patterns.
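
The OR-over-ports rule can be sketched as a readiness check. This is illustrative, assuming dependencies are normalized to objects and finished steps record which output port they emitted:

```python
def dependency_satisfied(deps, finished):
    """deps: normalized list of {'step': key, 'port': port-or-None}.
    finished: mapping of completed step key -> output port (or None)."""
    by_step = {}
    for d in deps:
        by_step.setdefault(d["step"], []).append(d.get("port"))
    for step, ports in by_step.items():
        if step not in finished:
            return False                      # upstream step not finished yet
        wanted = [p for p in ports if p is not None]
        # OR logic: any one matching port from the same source allows execution
        if wanted and finished[step] not in wanted:
            return False
    return True
```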

Reference Resolution

Steps can dynamically reference values from automation inputs and previous step results using the $/ syntax.

Syntax

| Pattern | Resolves To |
| --- | --- |
| $/inputs/{param} | Automation input parameter |
| $/inputs/{param}/{nested} | Nested field in input |
| $/steps/{key}/result | Full result of a completed step |
| $/steps/{key}/result/{field} | Specific field from step result |
| $/steps/{key}/status | Status of a step (completed, failed, etc.) |

Examples

Reference an input parameter passed when triggering the automation:

{
  "key": "extract",
  "action": {
    "kind": "dlt_extract",
    "definition": {
      "kind": "dlt_extract",
      "source": "shopify",
      "start_date": "$/inputs/sync_start_date"
    }
  }
}

Reference the result of a previous step:

{
  "key": "transform",
  "action": {
    "kind": "duckdb_transform",
    "definition": {
      "kind": "duckdb_transform",
      "sql": "SELECT * FROM read_parquet('$/steps/extract/result/destination_path')"
    }
  },
  "depends_on": ["extract"]
}

Access deeply nested values:

{
  "key": "notify",
  "action": {
    "kind": "http_request",
    "definition": {
      "kind": "http_request",
      "method": "POST",
      "body": {
        "rows_extracted": "$/steps/extract/result/per_resource_rows/orders",
        "customer_email": "$/inputs/notification/email"
      }
    }
  },
  "depends_on": ["extract"]
}

Object Reference Format

References can also be wrapped in an object for clarity:

{
  "connection": {
    "kind": "reference",
    "ref": "$/steps/setup/result/connection_id"
  }
}

Static values can be explicitly marked:

{
  "resources": {
    "kind": "static",
    "values": ["orders", "products"]
  }
}
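
A resolver for the $/ syntax can be sketched as path traversal over the run context. This is an illustration of the patterns in the table above, not the platform's implementation:

```python
def resolve_ref(ref: str, inputs: dict, steps: dict):
    """Resolve a '$/' reference against automation inputs and step state.

    steps maps step key -> {"status": ..., "result": {...}}.
    """
    if not ref.startswith("$/"):
        return ref                      # plain static value
    parts = ref[2:].split("/")
    if parts[0] == "inputs":
        node = inputs
        remainder = parts[1:]           # e.g. ["notification", "email"]
    elif parts[0] == "steps":
        node = steps[parts[1]]
        remainder = parts[2:]           # e.g. ["result", "destination_path"] or ["status"]
    else:
        raise ValueError(f"unknown reference root: {ref}")
    for part in remainder:
        node = node[part]
    return node
```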

DAG Orchestration

The DAG orchestrator manages parallel execution and dependency resolution.

Parallel Execution

The max_parallel setting limits concurrent step execution (default: 3):

{
  "name": "Data Pipeline",
  "max_parallel": 5,
  "steps": [...]
}

| Setting | Effect |
| --- | --- |
| max_parallel: 1 | Sequential execution, one step at a time |
| max_parallel: 3 | Up to 3 steps run concurrently (default) |
| max_parallel: 10 | High parallelism for independent steps |
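
The effect of max_parallel can be illustrated with a wave-based simulation: each wave runs up to max_parallel steps whose dependencies have all finished. This is a deliberate simplification; it shows the scheduling constraint, not the orchestrator's actual asynchronous dispatch:

```python
def simulate_dag(steps, max_parallel=3):
    """Return the waves of step keys in execution order, honoring
    depends_on and the max_parallel concurrency limit."""
    done, waves = set(), []
    remaining = {s["key"]: set(s.get("depends_on", [])) for s in steps}
    while remaining:
        # a step is ready when every dependency has completed
        ready = sorted(k for k, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("cycle or unsatisfiable dependency")
        wave = ready[:max_parallel]
        waves.append(wave)
        done.update(wave)
        for k in wave:
            del remaining[k]
    return waves
```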

Port-Based Routing

Multi-output actions (approval_gate, conditional_branch) use ports to route execution:

{
  "steps": [
    { "key": "check_data", "action": { "kind": "conditional_branch", ... } },
    {
      "key": "process_data",
      "depends_on": [{ "step": "check_data", "port": "true" }]
    },
    {
      "key": "handle_empty",
      "depends_on": [{ "step": "check_data", "port": "false" }]
    }
  ]
}

Only one downstream path executes based on the action's output port.

Idempotency

The orchestrator ensures steps are not duplicated:

  • Each step has a concurrency key based on run ID and step key
  • Database constraints prevent duplicate dispatches
  • If two workers race to dispatch the same step, only one succeeds
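
The concurrency-key idea can be emulated in memory. The platform relies on database constraints; here a Python set stands in for the unique index on (run ID, step key):

```python
class Dispatcher:
    """Only the first dispatch of a given concurrency key succeeds,
    mimicking a unique database constraint on (run_id, step_key)."""

    def __init__(self):
        self._dispatched = set()

    def dispatch(self, run_id: str, step_key: str) -> bool:
        key = (run_id, step_key)        # the concurrency key
        if key in self._dispatched:
            return False                # a racing worker already dispatched it
        self._dispatched.add(key)
        return True
```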

Step Conditions

Steps can execute conditionally based on the status of their dependencies.

| Condition | Description | Use Case |
| --- | --- | --- |
| on_success | Only if ALL dependencies completed successfully | Normal flow |
| on_error | Only if ANY dependency failed | Error handling |
| always | Regardless of dependency status | Cleanup, notifications |

Examples

Run only when a previous step fails:

{
  "key": "handle_extraction_error",
  "action": {
    "kind": "http_request",
    "definition": {
      "kind": "http_request",
      "method": "POST",
      "url": "https://alerts.example.com/webhook",
      "body": { "error": "Extraction failed" }
    }
  },
  "depends_on": ["extract_data"],
  "condition": { "type": "on_error" }
}

Always run regardless of previous step status:

{
  "key": "cleanup_temp_files",
  "action": { "kind": "http_request", "definition": {...} },
  "depends_on": ["transform_data"],
  "condition": { "type": "always" }
}

Notify only on successful completion:

{
  "key": "notify_success",
  "action": { "kind": "http_request", "definition": {...} },
  "depends_on": ["load_to_warehouse"],
  "condition": { "type": "on_success" }
}

When no condition is specified, a step runs only when ALL of its dependencies complete successfully (implicit on_success).
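
The three condition types reduce to a small decision function. A sketch of the documented semantics, given the terminal statuses of a step's dependencies:

```python
def should_run(condition, dep_statuses):
    """Decide whether a step runs given its dependencies' terminal statuses."""
    ctype = (condition or {}).get("type", "on_success")  # implicit default
    if ctype == "always":
        return True                                      # cleanup, notifications
    if ctype == "on_error":
        return any(s == "failed" for s in dep_statuses)  # ANY failure triggers it
    # on_success: every dependency must have completed
    return all(s == "completed" for s in dep_statuses)
```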

Creating Automations

An automation definition combines a name, a trigger, optional conditions, and actions. A scheduled automation:

{
  "name": "Daily Order Sync",
  "description": "Sync Shopify orders every day at 2am ET",
  "trigger": {
    "type": "schedule",
    "schedule": "0 2 * * *",
    "timezone": "America/New_York"
  },
  "actions": [
    { "kind": "dlt_extract", "definition": { "source": "shopify", "resources": ["orders"] } }
  ],
  "enabled": true
}

An event-triggered automation:

{
  "name": "Process New Data",
  "description": "Transform data when extraction completes",
  "trigger": {
    "type": "event",
    "eventType": "action.completed",
    "filters": { "kind": "dlt_extract" }
  },
  "conditions": [
    { "field": "event.data.recordCount", "operator": "gt", "value": 0 }
  ],
  "actions": [
    { "kind": "transform", "definition": {...} }
  ],
  "enabled": true
}

A webhook-triggered automation:

{
  "name": "External Trigger",
  "description": "Run sync when triggered by external system",
  "trigger": {
    "type": "webhook",
    "path": "/external/sync",
    "secret": "whsec_your_secret_here"
  },
  "actions": [
    { "kind": "dlt_extract", "definition": {...} }
  ],
  "enabled": true
}

Crystallized Automations

Automations can be created manually or crystallized from successful agent runs and chat threads. Crystallization extracts the execution DAG from an AI-driven workflow and converts it into a deterministic automation.

| Source Type | Origin | How Created |
| --- | --- | --- |
| MANUAL | Created directly via API or UI | Standard creation flow |
| CRYSTALLIZED | Extracted from agent run or chat thread | POST /agents/runs/{id}/crystallize or POST /chat/threads/{id}/crystallize |

Crystallized automations are fully editable — you can modify steps, add triggers, adjust conditions, or extend the workflow after creation. They behave identically to manually created automations.

See Crystallization for a detailed explanation of how ad-hoc AI work becomes repeatable automation.

Best Practices

  1. Start simple — Begin with single-action automations, add complexity gradually
  2. Use conditions wisely — Filter early to avoid unnecessary action executions
  3. Monitor runs — Check automation run history regularly, set up failure alerts
  4. Test with manual triggers — Use the trigger endpoint to test before enabling schedules
  5. Use descriptive names — Include frequency and purpose (e.g., "Daily 2am - Sync Orders")
  6. Consider time zones — Always specify timezone for scheduled automations

OpenAPI Reference

For detailed endpoint schemas, request/response formats, and authentication, see the OpenAPI reference.
