Workflow Concepts
Understand what workflows are, how they execute, and the building blocks you combine to create automations.
What Is a Workflow?
A workflow is a directed graph of operations. Each operation is a node; the arrows between them are edges. When a workflow runs, the execution engine traverses the graph from trigger to leaf, executing each node in topological order and passing data along the edges.
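The traversal described here is a standard topological ordering of a directed acyclic graph. As a minimal sketch (using hypothetical node names, not the platform's actual internals), Kahn's algorithm produces an order where every node runs only after all of its predecessors:

```python
from collections import deque

def topological_order(nodes, edges):
    """Return nodes ordered so every edge's source precedes its target."""
    indegree = {n: 0 for n in nodes}
    successors = {n: [] for n in nodes}
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1
    # Triggers have no incoming edges, so they are ready first.
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:   # all predecessors done; node is ready
                queue.append(nxt)
    return order

# trigger -> fetch -> filter -> notify, plus a direct fetch -> notify edge
print(topological_order(
    ["trigger", "fetch", "filter", "notify"],
    [("trigger", "fetch"), ("fetch", "filter"),
     ("fetch", "notify"), ("filter", "notify")],
))
```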
You build workflows visually on the canvas — no code required. Drag nodes from the library, draw connections, configure parameters, and hit Test Run to see it work.
Anatomy of a Workflow
Every workflow has three core components:
- Trigger
- The entry point. A trigger determines when the workflow starts — on a schedule (cron), when a webhook receives an HTTP request, or manually via Test Run.
- Nodes
- The individual operations. Nodes can be logic operations (conditions, loops, transforms), AI operations (prompts, classification), or actions from connected services (send email, create Jira issue, post to Slack).
- Edges
- The connections between nodes. Edges define execution order and carry data from one node's output to the next node's input.
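Put together, the three components can be pictured as plain data. This is a hypothetical schema for illustration only, not the platform's actual storage format:

```python
# A workflow definition sketched as plain data (hypothetical field names):
workflow = {
    "trigger": {"type": "schedule", "cron": "0 9 * * 1-5"},   # weekdays at 09:00
    "nodes": {
        "fetch": {"type": "gmail.fetch_messages"},
        "notify": {"type": "slack.post_message"},
    },
    # Edges define both execution order and where each node's output flows.
    "edges": [("trigger", "fetch"), ("fetch", "notify")],
}
```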
Workflow Lifecycle
A workflow moves through a set of statuses during its lifetime:
- Draft
- The initial state. The workflow exists in the editor but has never been executed. Edit freely — nothing will trigger until you activate it.
- Active
- The workflow is live. If it has a schedule trigger, it will execute on the defined cadence. If it has a webhook trigger, it will respond to incoming HTTP requests.
- Paused
- The workflow exists but is not triggering. Useful when you need to make changes without deleting the workflow. Reactivate it when ready.
- Archived
- Soft-deleted. The workflow no longer appears in the main list but can be restored if needed.
How Execution Works
When a workflow is triggered, the backend creates an Execution record and enqueues it for processing. The execution engine then:
- Resolves the graph into a topological order so nodes execute in the correct sequence.
- Executes each node, passing the output of predecessor nodes as input to successors.
- Saves a checkpoint after each node completes — if the process crashes, execution can resume from the last checkpoint.
- Broadcasts real-time status updates over WebSocket so you can watch the run live.
- Records the final result (success, failure, or cancellation) along with execution duration and AI token usage.
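The steps above can be condensed into a small sketch. This assumes nodes are already in topological order and models a checkpoint as a dict of completed node outputs; a real engine would persist the checkpoint and broadcast status updates after each node, as described above:

```python
def run_workflow(ordered_nodes, handlers, checkpoint=None):
    """Execute nodes in order, skipping any that a prior (crashed) attempt
    already completed. `checkpoint` maps node name -> saved output."""
    outputs = dict(checkpoint or {})
    for node in ordered_nodes:
        if node in outputs:
            continue                         # resume: node finished last time
        outputs[node] = handlers[node](outputs)   # predecessors' outputs as input
        # real engine: persist `outputs` and push a WebSocket status update here
    return outputs

handlers = {
    "fetch": lambda outputs: {"messages": ["hello"]},
    "notify": lambda outputs: {"posted": outputs["fetch"]["messages"][0]},
}
print(run_workflow(["fetch", "notify"], handlers))
```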
Data Flow Between Nodes
Every node produces an output — a JSON object containing the result of its operation. Downstream nodes can reference this output using the data picker or by manually entering a data path.
For example, if a Gmail node fetches 5 messages, its output might look like {messages: [...]}. A downstream Slack node can reference {{gmail.messages[0].subject}} to include the first email's subject in the Slack message body.
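A reference like {{gmail.messages[0].subject}} is resolved by walking the referenced node's output JSON. The following is a minimal sketch of that lookup, modeled on the Gmail/Slack example; the actual template syntax and resolver are the platform's own:

```python
import re

def resolve(template, outputs):
    """Replace {{node.path[index].field}} references with values
    drawn from upstream node outputs."""
    def lookup(match):
        value = outputs
        # split "gmail.messages[0].subject" into keys and list indices
        for part in re.findall(r"\w+|\[\d+\]", match.group(1)):
            value = value[int(part[1:-1])] if part.startswith("[") else value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

outputs = {"gmail": {"messages": [{"subject": "Q3 report"}]}}
print(resolve("New email: {{gmail.messages[0].subject}}", outputs))
# prints: New email: Q3 report
```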
The data mapping system is covered in detail in the Data Mapping guide.
Sub-Workflows
A sub-workflow is a workflow nested inside another workflow. It appears as a single node on the parent canvas, but when executed it runs the entire child workflow. This is useful for:
- Reusability — define a common sequence once and use it in multiple parent workflows.
- Organization — break large workflows into manageable, named pieces.
- Isolation — sub-workflows have their own execution context and error handling.
To use a sub-workflow, open the node library, find the workflow you want to embed (it appears under a "Sub-Workflows" category), and drag it onto the canvas.
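Conceptually, a sub-workflow node wraps a whole child workflow behind a single-node interface: the child gets its own execution context, and only its final output flows back to the parent. A hypothetical sketch of that wrapping (names and context shape are illustrative assumptions):

```python
def make_subworkflow_handler(child_nodes, child_handlers):
    """Wrap a child workflow so it behaves like one node on the parent canvas:
    it runs in its own isolated context and returns only its final output."""
    def handler(parent_outputs):
        ctx = {"input": parent_outputs}      # fresh context, seeded with parent data
        for node in child_nodes:             # child assumed already in topological order
            ctx[node] = child_handlers[node](ctx)
        return ctx[child_nodes[-1]]          # last node's output is the node's result
    return handler
```

Because the child keeps its own context, an error inside it can be handled (or surfaced) at the sub-workflow node without exposing the child's intermediate state to the parent.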
Dive Deeper
Node Types Reference
A complete guide to every node: triggers, logic, actions, and AI.
Data Mapping
Learn how to reference, transform, and route data between nodes.
Conditions & Loops
Branching logic, iteration, and advanced flow control.
Templates & AI Builder
Start from a template or describe your workflow in plain English.