# Reusable workflows

Reusable workflows allow you to call one workflow from another, enabling modular workflow design and reducing duplication.

## Making workflows callable

Mark a workflow as callable by setting `trigger.callable: true`:

```yaml
id: deploy-model
name: Deploy Model

trigger:
  callable: true

inputs:
  model_path:
    type: string
    required: true
  deploy_token:
    type: string
    required: true
    secret: true # Masked in logs and UI

jobs:
  deploy:
    steps:
      - name: Deploy
        env:
          TOKEN: ${{ .inputs.deploy_token }}
        run: h2o deploy ${{ .inputs.model_path }} --token $TOKEN
```

## Workflow ID and resource names

Each workflow has two identifiers:

| Identifier | Description |
| --- | --- |
| Workflow ID (`id` field) | User-defined unique identifier that forms part of the resource name. Used when calling workflows with `WorkflowCall`. Required. |
| Workflow name (`name` field) | Optional display name. Not unique; multiple workflows can share the same display name. Not used for `WorkflowCall` references. |
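
As a minimal sketch, the two fields sit side by side at the top of a workflow definition (the values below are placeholders taken from the examples on this page):

```yaml
id: deploy-model   # Unique within the workspace; used in WorkflowCall references
name: Deploy Model # Display name only; may be shared by other workflows
```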

### Resource name format

The full resource name combines the workspace ID and workflow ID:

```
/workspaces/{workspace-id}/workflows/{workflow-id}
```

Example: `/workspaces/abc123/workflows/deploy-model`

### Reference formats in `WorkflowCall`

When calling workflows, you can use two formats:

**Short ID** (same workspace):

```yaml
workflow:
  name: deploy-model # Expands to /workspaces/{current-workspace}/workflows/deploy-model
```

**Full resource name** (cross-workspace calls):

```yaml
workflow:
  name: /workspaces/other-workspace-id/workflows/deploy-model
```

## Calling workflows

Call a workflow from a job using the `workflow` field:

```yaml
id: train-and-deploy
name: Train and Deploy

secrets:
  - name: workspaces/abc123/secrets/deploy-token
    as: deploy_token

jobs:
  train:
    steps:
      - name: Train model
        run: python train.py
      - name: Upload model
        upload:
          path: ./model/
          destination: drive://models/latest/

  deploy:
    depends_on: [train]
    workflow:
      name: deploy-model # Short ID - references workflow in same workspace
      inputs:
        model_path: "drive://models/latest/"
        deploy_token: ${{ .secrets.deploy_token }}
```

**Cross-workspace example:**

```yaml
deploy-to-prod:
  workflow:
    name: /workspaces/prod-workspace-123/workflows/deploy-service # Full resource name
    inputs:
      version: "v1.2.3"
```

## Input passing

### Regular inputs

Pass inputs using the `workflow.inputs` map. Expressions are supported:

```yaml
jobs:
  process:
    workflow:
      name: workflow-process-data
      inputs:
        dataset: ${{ .inputs.dataset_name }}
        preprocessing: "standard"
```

### Secret inputs

Mark inputs as `secret: true` to mask their values in logs and the UI:

```yaml
# Callable workflow
inputs:
  api_key:
    type: string
    secret: true # Value will be masked
```

```yaml
# Caller
secrets:
  - name: workspaces/abc123/secrets/api-key
    as: api_key

jobs:
  call:
    workflow:
      name: workflow-api-integration
      inputs:
        api_key: ${{ .secrets.api_key }}
```

## Behavior

### Concurrency control

Each workflow uses its own concurrency settings independently. Called workflows do not inherit parent concurrency groups.
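
As an illustrative sketch only — the `concurrency` block below is hypothetical syntax, so check the workflow reference for the actual field names — the idea is that limits declared on a callable workflow govern its own runs, regardless of which workflow calls it:

```yaml
id: deploy-model

trigger:
  callable: true

# Hypothetical settings for illustration: these would apply to runs of
# deploy-model itself and would not be inherited by, or from, any caller.
concurrency:
  group: deploy-model
  limit: 1
```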

### Failure propagation

When a called workflow fails, the parent job fails, triggering standard failure handling (`cancel_on_failure`, `depends_on`).
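
As a sketch reusing the `deploy-model` workflow defined earlier: if the called workflow fails, the `deploy` job fails with it, so the dependent `notify` job is handled exactly as if `deploy` had failed directly:

```yaml
id: deploy-with-notification

secrets:
  - name: workspaces/abc123/secrets/deploy-token
    as: deploy_token

jobs:
  deploy:
    workflow:
      name: deploy-model # Parent job fails when the called workflow fails
      inputs:
        model_path: "drive://models/latest/"
        deploy_token: ${{ .secrets.deploy_token }}

  notify:
    depends_on: [deploy] # Never runs when deploy fails
    steps:
      - name: Announce
        run: echo "deploy-model finished successfully"
```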

## Example: Parallel processing

**Callable workflow:**

```yaml
id: process-dataset

trigger:
  callable: true

inputs:
  dataset_name:
    type: string
    required: true

jobs:
  process:
    steps:
      - name: Download
        download:
          source: drive://datasets/raw/${{ .inputs.dataset_name }}/
          path: ./data/
      - name: Process
        run: python process.py --input ./data/
      - name: Upload
        upload:
          path: ./data/processed/
          destination: drive://datasets/processed/${{ .inputs.dataset_name }}/
```

**Caller (parallel processing):**

```yaml
id: multi-dataset-processing

cancel_on_failure: false

jobs:
  process-users:
    workflow:
      name: process-dataset
      inputs:
        dataset_name: "users"

  process-transactions:
    workflow:
      name: process-dataset
      inputs:
        dataset_name: "transactions"

  train:
    depends_on: [process-users, process-transactions]
    steps:
      - name: Train model
        run: python train.py --data drive://datasets/processed/
```
